00:00:00.000 Started by upstream project "autotest-per-patch" build number 130929 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.046 The recommended git tool is: git 00:00:00.047 using credential 00000000-0000-0000-0000-000000000002 00:00:00.048 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.074 Fetching changes from the remote Git repository 00:00:00.076 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.122 Using shallow fetch with depth 1 00:00:00.122 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.122 > git --version # timeout=10 00:00:00.179 > git --version # 'git version 2.39.2' 00:00:00.179 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.201 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.201 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.645 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.657 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.669 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:04.669 > git config core.sparsecheckout # timeout=10 00:00:04.679 > git read-tree -mu HEAD # timeout=10 00:00:04.695 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:04.714 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:04.714 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:04.822 [Pipeline] Start of Pipeline 00:00:04.834 [Pipeline] library 00:00:04.835 Loading library shm_lib@master 00:00:04.835 Library shm_lib@master is cached. Copying from home. 00:00:04.850 [Pipeline] node 00:00:04.859 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.861 [Pipeline] { 00:00:04.870 [Pipeline] catchError 00:00:04.872 [Pipeline] { 00:00:04.882 [Pipeline] wrap 00:00:04.890 [Pipeline] { 00:00:04.897 [Pipeline] stage 00:00:04.899 [Pipeline] { (Prologue) 00:00:05.162 [Pipeline] sh 00:00:05.448 + logger -p user.info -t JENKINS-CI 00:00:05.465 [Pipeline] echo 00:00:05.466 Node: CYP9 00:00:05.471 [Pipeline] sh 00:00:05.773 [Pipeline] setCustomBuildProperty 00:00:05.782 [Pipeline] echo 00:00:05.783 Cleanup processes 00:00:05.786 [Pipeline] sh 00:00:06.082 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.082 2924115 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.095 [Pipeline] sh 00:00:06.380 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.380 ++ grep -v 'sudo pgrep' 00:00:06.380 ++ awk '{print $1}' 00:00:06.380 + sudo kill -9 00:00:06.380 + true 00:00:06.393 [Pipeline] cleanWs 00:00:06.401 [WS-CLEANUP] Deleting project workspace... 00:00:06.401 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.409 [WS-CLEANUP] done 00:00:06.411 [Pipeline] setCustomBuildProperty 00:00:06.419 [Pipeline] sh 00:00:06.702 + sudo git config --global --replace-all safe.directory '*' 00:00:06.786 [Pipeline] httpRequest 00:00:07.648 [Pipeline] echo 00:00:07.649 Sorcerer 10.211.164.101 is alive 00:00:07.656 [Pipeline] retry 00:00:07.657 [Pipeline] { 00:00:07.669 [Pipeline] httpRequest 00:00:07.674 HttpMethod: GET 00:00:07.675 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.675 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.692 Response Code: HTTP/1.1 200 OK 00:00:07.693 Success: Status code 200 is in the accepted range: 200,404 00:00:07.693 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.717 [Pipeline] } 00:00:11.734 [Pipeline] // retry 00:00:11.741 [Pipeline] sh 00:00:12.041 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:12.079 [Pipeline] httpRequest 00:00:12.508 [Pipeline] echo 00:00:12.510 Sorcerer 10.211.164.101 is alive 00:00:12.519 [Pipeline] retry 00:00:12.521 [Pipeline] { 00:00:12.535 [Pipeline] httpRequest 00:00:12.540 HttpMethod: GET 00:00:12.541 URL: http://10.211.164.101/packages/spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz 00:00:12.542 Sending request to url: http://10.211.164.101/packages/spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz 00:00:12.553 Response Code: HTTP/1.1 200 OK 00:00:12.553 Success: Status code 200 is in the accepted range: 200,404 00:00:12.554 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz 00:00:56.423 [Pipeline] } 00:00:56.440 [Pipeline] // retry 00:00:56.448 [Pipeline] sh 00:00:56.741 + tar --no-same-owner -xf spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz 00:00:59.316 [Pipeline] sh 00:00:59.609 + git -C spdk log --oneline -n5 00:00:59.610 6101e4048 vhost: defer the g_fini_cb after called 00:00:59.610 92108e0a2 fsdev/aio: add support for null IOs 00:00:59.610 dcdab59d3 lib/reduce: Check return code of read superblock 00:00:59.610 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:00:59.610 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:00:59.644 [Pipeline] } 00:00:59.653 [Pipeline] // stage 00:00:59.659 [Pipeline] stage 00:00:59.661 [Pipeline] { (Prepare) 00:00:59.670 [Pipeline] writeFile 00:00:59.680 [Pipeline] sh 00:00:59.965 + logger -p user.info -t JENKINS-CI 00:00:59.979 [Pipeline] sh 00:01:00.270 + logger -p user.info -t JENKINS-CI 00:01:00.283 [Pipeline] sh 00:01:00.575 + cat autorun-spdk.conf 00:01:00.575 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.575 SPDK_TEST_NVMF=1 00:01:00.575 SPDK_TEST_NVME_CLI=1 00:01:00.575 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.575 SPDK_TEST_NVMF_NICS=e810 00:01:00.575 SPDK_TEST_VFIOUSER=1 00:01:00.575 SPDK_RUN_UBSAN=1 00:01:00.575 NET_TYPE=phy 00:01:00.584 RUN_NIGHTLY=0 00:01:00.589 [Pipeline] readFile 00:01:00.612 [Pipeline] withEnv 00:01:00.615 [Pipeline] { 00:01:00.627 [Pipeline] sh 00:01:00.917 + set -ex 00:01:00.917 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:00.917 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:00.917 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.917 ++ SPDK_TEST_NVMF=1 00:01:00.917 ++ SPDK_TEST_NVME_CLI=1 00:01:00.917 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.917 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:00.917 ++ SPDK_TEST_VFIOUSER=1 00:01:00.917 ++ SPDK_RUN_UBSAN=1 00:01:00.917 ++ NET_TYPE=phy 00:01:00.917 ++ RUN_NIGHTLY=0 00:01:00.917 + case $SPDK_TEST_NVMF_NICS in 00:01:00.917 + DRIVERS=ice 00:01:00.917 + [[ tcp == \r\d\m\a ]] 00:01:00.917 + [[ -n ice ]] 00:01:00.917 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:00.917 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:00.917 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:00.917 rmmod: ERROR: Module irdma is not currently loaded 00:01:00.917 rmmod: ERROR: Module i40iw is not currently loaded 00:01:00.917 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:00.917 + true 00:01:00.917 + for D in $DRIVERS 00:01:00.917 + sudo modprobe ice 00:01:00.917 + exit 0 00:01:00.927 [Pipeline] } 00:01:00.943 [Pipeline] // withEnv 00:01:00.948 [Pipeline] } 00:01:00.962 [Pipeline] // stage 00:01:00.970 [Pipeline] catchError 00:01:00.972 [Pipeline] { 00:01:00.985 [Pipeline] timeout 00:01:00.985 Timeout set to expire in 1 hr 0 min 00:01:00.987 [Pipeline] { 00:01:01.001 [Pipeline] stage 00:01:01.003 [Pipeline] { (Tests) 00:01:01.016 [Pipeline] sh 00:01:01.309 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:01.309 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:01.309 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:01.309 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:01.309 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.309 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:01.309 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:01.309 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:01.309 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:01.309 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:01.309 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:01.309 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:01.309 + source /etc/os-release 00:01:01.309 ++ NAME='Fedora Linux' 00:01:01.309 ++ VERSION='39 (Cloud Edition)' 00:01:01.309 ++ ID=fedora 00:01:01.309 ++ VERSION_ID=39 00:01:01.309 ++ VERSION_CODENAME= 00:01:01.309 ++ PLATFORM_ID=platform:f39 00:01:01.309 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:01.309 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:01.309 ++ LOGO=fedora-logo-icon 00:01:01.309 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:01.309 ++ HOME_URL=https://fedoraproject.org/ 00:01:01.309 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:01.309 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:01.309 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:01.309 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:01.309 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:01.309 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:01.309 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:01.309 ++ SUPPORT_END=2024-11-12 00:01:01.309 ++ VARIANT='Cloud Edition' 00:01:01.309 ++ VARIANT_ID=cloud 00:01:01.309 + uname -a 00:01:01.309 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:01.309 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:04.619 Hugepages 00:01:04.619 node hugesize free / total 00:01:04.619 node0 1048576kB 0 / 0 00:01:04.619 node0 2048kB 0 / 0 00:01:04.619 node1 1048576kB 0 / 0 00:01:04.619 node1 2048kB 0 / 0 00:01:04.619 
00:01:04.619 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:04.619 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:04.619 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:04.619 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:04.619 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:04.619 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:04.619 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:04.619 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:04.619 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:04.619 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:04.619 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:04.619 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:04.619 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:04.619 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:04.619 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:04.619 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:04.619 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:04.619 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:04.619 + rm -f /tmp/spdk-ld-path 00:01:04.619 + source autorun-spdk.conf 00:01:04.619 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.619 ++ SPDK_TEST_NVMF=1 00:01:04.619 ++ SPDK_TEST_NVME_CLI=1 00:01:04.619 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.619 ++ SPDK_TEST_NVMF_NICS=e810 00:01:04.619 ++ SPDK_TEST_VFIOUSER=1 00:01:04.619 ++ SPDK_RUN_UBSAN=1 00:01:04.619 ++ NET_TYPE=phy 00:01:04.619 ++ RUN_NIGHTLY=0 00:01:04.619 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:04.619 + [[ -n '' ]] 00:01:04.619 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:04.619 + for M in /var/spdk/build-*-manifest.txt 00:01:04.619 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:04.619 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:04.619 + for M in /var/spdk/build-*-manifest.txt 00:01:04.619 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:04.619 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:04.619 + for M in /var/spdk/build-*-manifest.txt 00:01:04.619 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:04.619 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:04.619 ++ uname 00:01:04.619 + [[ Linux == \L\i\n\u\x ]] 00:01:04.619 + sudo dmesg -T 00:01:04.619 + sudo dmesg --clear 00:01:04.619 + dmesg_pid=2925087 00:01:04.619 + [[ Fedora Linux == FreeBSD ]] 00:01:04.619 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:04.619 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:04.619 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:04.619 + [[ -x /usr/src/fio-static/fio ]] 00:01:04.619 + export FIO_BIN=/usr/src/fio-static/fio 00:01:04.619 + FIO_BIN=/usr/src/fio-static/fio 00:01:04.619 + sudo dmesg -Tw 00:01:04.619 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:04.619 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:04.619 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:04.619 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:04.619 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:04.619 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:04.619 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:04.619 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:04.619 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.619 Test configuration: 00:01:04.619 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.619 SPDK_TEST_NVMF=1 00:01:04.619 SPDK_TEST_NVME_CLI=1 00:01:04.619 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.619 SPDK_TEST_NVMF_NICS=e810 00:01:04.619 SPDK_TEST_VFIOUSER=1 00:01:04.619 SPDK_RUN_UBSAN=1 00:01:04.619 NET_TYPE=phy 00:01:04.619 RUN_NIGHTLY=0 00:08:35 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:04.619 00:08:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:04.619 00:08:35 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:04.619 00:08:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:04.619 00:08:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:04.619 00:08:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:04.619 00:08:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.619 00:08:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.619 00:08:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.620 00:08:35 -- paths/export.sh@5 -- $ export PATH 00:01:04.620 00:08:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.620 00:08:35 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:04.620 00:08:35 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:04.620 00:08:35 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728425315.XXXXXX 00:01:04.883 00:08:35 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1728425315.oIU0XB 00:01:04.883 00:08:35 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:04.883 00:08:35 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:04.883 00:08:35 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:04.883 00:08:35 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:04.883 00:08:35 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:04.883 00:08:35 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:04.883 00:08:35 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:04.883 00:08:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.883 00:08:35 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:04.883 00:08:35 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:04.883 00:08:35 -- pm/common@17 -- $ local monitor 00:01:04.883 00:08:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.883 00:08:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.883 00:08:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.883 00:08:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.883 00:08:35 -- pm/common@21 -- $ date +%s 00:01:04.883 00:08:35 -- pm/common@25 -- $ sleep 1 00:01:04.883 00:08:35 -- pm/common@21 -- $ date +%s 00:01:04.883 00:08:35 -- pm/common@21 -- $ date +%s 00:01:04.883 00:08:35 -- pm/common@21 -- $ date +%s 00:01:04.883 00:08:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425315 00:01:04.883 00:08:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425315 00:01:04.883 00:08:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425315 00:01:04.883 00:08:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425315 00:01:04.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425315_collect-cpu-load.pm.log 00:01:04.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425315_collect-vmstat.pm.log 00:01:04.883 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425315_collect-cpu-temp.pm.log 00:01:04.883 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425315_collect-bmc-pm.bmc.pm.log 00:01:05.826 00:08:36 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:05.826 00:08:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:05.826 00:08:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:05.826 00:08:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.826 00:08:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:05.826 Tue Oct 8 10:08:36 PM UTC 2024 00:01:05.826 00:08:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:05.826 v25.01-pre-42-g6101e4048 00:01:05.826 00:08:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:05.826 00:08:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:05.826 00:08:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:05.826 00:08:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:05.826 00:08:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:05.826 00:08:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:05.826 ************************************ 00:01:05.826 START TEST ubsan 00:01:05.826 ************************************ 00:01:05.826 00:08:36 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:05.826 using ubsan 00:01:05.826 00:01:05.826 real 0m0.001s 00:01:05.826 user 0m0.000s 00:01:05.826 sys 0m0.001s 00:01:05.826 00:08:36 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:05.826 00:08:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:05.826 ************************************ 00:01:05.826 END TEST ubsan 00:01:05.826 ************************************ 00:01:05.826 00:08:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:05.826 00:08:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:05.826 00:08:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:05.826 00:08:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:05.826 00:08:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:05.826 00:08:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:05.826 00:08:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:05.826 00:08:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:05.826 00:08:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:06.087 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:06.087 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:06.349 Using 'verbs' RDMA provider 00:01:22.227 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:34.469 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:35.303 Creating mk/config.mk...done. 00:01:35.303 Creating mk/cc.flags.mk...done. 00:01:35.303 Type 'make' to build. 
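For reference, the configure stage recorded above can be reproduced outside the CI harness with the same flags. A minimal sketch, assuming a local SPDK checkout with submodules initialized; the checkout path and job count are placeholders, not taken from this run, and the flag list is copied from the configure invocation shown above:

    # hypothetical local reproduction of the configure/make step from this log
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"   # placeholder; this CI run uses 'make -j144' below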
00:01:35.303 00:09:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:35.303 00:09:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:35.303 00:09:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:35.303 00:09:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.303 ************************************ 00:01:35.303 START TEST make 00:01:35.303 ************************************ 00:01:35.303 00:09:05 make -- common/autotest_common.sh@1125 -- $ make -j144 00:01:35.564 make[1]: Nothing to be done for 'all'. 00:01:37.483 The Meson build system 00:01:37.483 Version: 1.5.0 00:01:37.483 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:37.483 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:37.483 Build type: native build 00:01:37.483 Project name: libvfio-user 00:01:37.483 Project version: 0.0.1 00:01:37.483 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:37.483 C linker for the host machine: cc ld.bfd 2.40-14 00:01:37.483 Host machine cpu family: x86_64 00:01:37.483 Host machine cpu: x86_64 00:01:37.483 Run-time dependency threads found: YES 00:01:37.483 Library dl found: YES 00:01:37.483 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:37.483 Run-time dependency json-c found: YES 0.17 00:01:37.483 Run-time dependency cmocka found: YES 1.1.7 00:01:37.483 Program pytest-3 found: NO 00:01:37.483 Program flake8 found: NO 00:01:37.483 Program misspell-fixer found: NO 00:01:37.483 Program restructuredtext-lint found: NO 00:01:37.483 Program valgrind found: YES (/usr/bin/valgrind) 00:01:37.483 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:37.483 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:37.483 Compiler for C supports arguments -Wwrite-strings: YES 00:01:37.483 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:37.483 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:37.483 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:37.483 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:37.483 Build targets in project: 8 00:01:37.483 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:37.483 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:37.483 00:01:37.483 libvfio-user 0.0.1 00:01:37.483 00:01:37.483 User defined options 00:01:37.483 buildtype : debug 00:01:37.483 default_library: shared 00:01:37.483 libdir : /usr/local/lib 00:01:37.483 00:01:37.483 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.483 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:37.743 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:37.743 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:37.743 [3/37] Compiling C object samples/null.p/null.c.o 00:01:37.743 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:37.743 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:37.743 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:37.743 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:37.743 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:37.743 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:37.743 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:37.743 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:37.743 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:37.743 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:37.743 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:37.743 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:37.743 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:37.743 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:37.743 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:37.743 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:37.743 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:37.743 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:37.743 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:37.743 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:37.743 [24/37] Compiling C object samples/server.p/server.c.o 00:01:37.743 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:37.743 [26/37] Compiling C object samples/client.p/client.c.o 00:01:37.743 [27/37] Linking target samples/client 00:01:37.743 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:37.743 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:37.743 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:37.743 [31/37] Linking target test/unit_tests 00:01:38.004 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:38.004 [33/37] Linking target samples/server 00:01:38.004 [34/37] Linking target samples/gpio-pci-idio-16 00:01:38.004 [35/37] Linking target samples/null 00:01:38.004 [36/37] Linking target samples/lspci 00:01:38.004 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:38.004 INFO: autodetecting backend as ninja 00:01:38.004 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:38.265 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:38.526 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:38.526 ninja: no work to do. 00:01:45.111 The Meson build system 00:01:45.111 Version: 1.5.0 00:01:45.111 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:45.111 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:45.111 Build type: native build 00:01:45.111 Program cat found: YES (/usr/bin/cat) 00:01:45.111 Project name: DPDK 00:01:45.111 Project version: 24.03.0 00:01:45.111 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:45.111 C linker for the host machine: cc ld.bfd 2.40-14 00:01:45.111 Host machine cpu family: x86_64 00:01:45.111 Host machine cpu: x86_64 00:01:45.111 Message: ## Building in Developer Mode ## 00:01:45.111 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.111 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:45.111 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.111 Program python3 found: YES (/usr/bin/python3) 00:01:45.111 Program cat found: YES (/usr/bin/cat) 00:01:45.111 Compiler for C supports arguments -march=native: YES 00:01:45.111 Checking for size of "void *" : 8 00:01:45.111 Checking for size of "void *" : 8 (cached) 00:01:45.111 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:45.111 Library m found: YES 00:01:45.111 Library numa found: YES 00:01:45.111 Has header "numaif.h" : YES 00:01:45.111 Library fdt found: NO 00:01:45.111 Library execinfo found: NO 00:01:45.111 Has header "execinfo.h" : YES 00:01:45.111 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:45.111 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.111 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.111 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.111 Run-time dependency openssl found: YES 3.1.1 00:01:45.111 Run-time dependency libpcap found: YES 1.10.4 00:01:45.111 Has header "pcap.h" with dependency libpcap: YES 00:01:45.111 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.111 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.111 Compiler for C supports arguments -Wformat: YES 00:01:45.111 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:45.111 Compiler for C supports arguments -Wformat-security: NO 00:01:45.111 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.111 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.111 Compiler for C supports arguments -Wnested-externs: YES 00:01:45.111 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.111 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.111 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.111 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.111 Compiler for C supports arguments -Wundef: YES 00:01:45.111 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.111 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.111 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:45.111 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.111 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.111 Program objdump found: YES (/usr/bin/objdump) 00:01:45.111 Compiler for C supports arguments -mavx512f: YES 00:01:45.111 Checking if "AVX512 checking" compiles: YES 00:01:45.111 Fetching value of define "__SSE4_2__" : 1 00:01:45.111 Fetching value of define "__AES__" : 1 00:01:45.111 Fetching value of define "__AVX__" : 1 00:01:45.111 Fetching value of define "__AVX2__" : 1 00:01:45.111 Fetching value of define "__AVX512BW__" : 1 00:01:45.111 Fetching value of define "__AVX512CD__" : 1 00:01:45.111 Fetching value of define "__AVX512DQ__" : 1 00:01:45.111 Fetching value of define "__AVX512F__" : 1 00:01:45.111 Fetching value of define "__AVX512VL__" : 1 00:01:45.111 Fetching value of define "__PCLMUL__" : 1 00:01:45.111 Fetching value of define "__RDRND__" : 1 00:01:45.111 Fetching value of define "__RDSEED__" : 1 00:01:45.111 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:45.111 Fetching value of define "__znver1__" : (undefined) 00:01:45.111 Fetching value of define "__znver2__" : (undefined) 00:01:45.111 Fetching value of define "__znver3__" : (undefined) 00:01:45.111 Fetching value of define "__znver4__" : (undefined) 00:01:45.111 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.111 Message: lib/log: Defining dependency "log" 00:01:45.111 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.111 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.111 Checking for function "getentropy" : NO 00:01:45.111 Message: lib/eal: Defining dependency "eal" 00:01:45.111 Message: lib/ring: Defining dependency "ring" 00:01:45.111 Message: lib/rcu: Defining dependency "rcu" 00:01:45.111 Message: lib/mempool: Defining dependency "mempool" 00:01:45.111 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.111 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.111 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.111 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.111 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.111 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:45.111 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:45.111 Compiler for C supports arguments -mpclmul: YES 00:01:45.111 Compiler for C supports arguments -maes: YES 00:01:45.111 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.111 Compiler for C supports arguments -mavx512bw: YES 00:01:45.111 Compiler for C supports arguments -mavx512dq: YES 00:01:45.111 Compiler for C supports arguments -mavx512vl: YES 00:01:45.111 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.111 Compiler for C supports arguments -mavx2: YES 00:01:45.111 Compiler for C supports arguments -mavx: YES 00:01:45.111 Message: lib/net: Defining dependency "net" 00:01:45.111 Message: lib/meter: Defining dependency "meter" 00:01:45.111 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.111 Message: lib/pci: Defining dependency "pci" 00:01:45.111 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.111 Message: lib/hash: Defining dependency "hash" 00:01:45.111 Message: lib/timer: Defining dependency "timer" 00:01:45.111 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.111 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.111 Message: lib/dmadev: Defining dependency "dmadev" 
00:01:45.111 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:45.111 Message: lib/power: Defining dependency "power" 00:01:45.111 Message: lib/reorder: Defining dependency "reorder" 00:01:45.111 Message: lib/security: Defining dependency "security" 00:01:45.111 Has header "linux/userfaultfd.h" : YES 00:01:45.111 Has header "linux/vduse.h" : YES 00:01:45.111 Message: lib/vhost: Defining dependency "vhost" 00:01:45.111 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.111 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.111 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.111 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.111 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:45.111 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:45.111 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:45.111 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:45.111 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:45.111 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:45.111 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:45.111 Configuring doxy-api-html.conf using configuration 00:01:45.111 Configuring doxy-api-man.conf using configuration 00:01:45.111 Program mandb found: YES (/usr/bin/mandb) 00:01:45.111 Program sphinx-build found: NO 00:01:45.111 Configuring rte_build_config.h using configuration 00:01:45.111 Message: 00:01:45.111 ================= 00:01:45.111 Applications Enabled 00:01:45.111 ================= 00:01:45.111 00:01:45.111 apps: 00:01:45.111 00:01:45.111 00:01:45.111 Message: 00:01:45.111 ================= 00:01:45.111 Libraries Enabled 00:01:45.111 ================= 00:01:45.111 00:01:45.111 libs: 00:01:45.111 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:45.111 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:45.111 cryptodev, dmadev, power, reorder, security, vhost, 00:01:45.111 00:01:45.111 Message: 00:01:45.111 =============== 00:01:45.111 Drivers Enabled 00:01:45.111 =============== 00:01:45.111 00:01:45.111 common: 00:01:45.111 00:01:45.111 bus: 00:01:45.111 pci, vdev, 00:01:45.111 mempool: 00:01:45.111 ring, 00:01:45.111 dma: 00:01:45.111 00:01:45.111 net: 00:01:45.111 00:01:45.111 crypto: 00:01:45.111 00:01:45.111 compress: 00:01:45.111 00:01:45.111 vdpa: 00:01:45.111 00:01:45.111 00:01:45.111 Message: 00:01:45.111 ================= 00:01:45.111 Content Skipped 00:01:45.111 ================= 00:01:45.111 00:01:45.111 apps: 00:01:45.111 dumpcap: explicitly disabled via build config 00:01:45.111 graph: explicitly disabled via build config 00:01:45.111 pdump: explicitly disabled via build config 00:01:45.111 proc-info: explicitly disabled via build config 00:01:45.111 test-acl: explicitly disabled via build config 00:01:45.111 test-bbdev: explicitly disabled via build config 00:01:45.111 test-cmdline: explicitly disabled via build config 00:01:45.112 test-compress-perf: explicitly disabled via build config 00:01:45.112 test-crypto-perf: explicitly disabled via build config 00:01:45.112 test-dma-perf: explicitly disabled via build config 00:01:45.112 test-eventdev: explicitly disabled via build config 00:01:45.112 test-fib: explicitly disabled via build config 00:01:45.112 test-flow-perf: explicitly disabled via build config 00:01:45.112 test-gpudev: explicitly disabled 
via build config 00:01:45.112 test-mldev: explicitly disabled via build config 00:01:45.112 test-pipeline: explicitly disabled via build config 00:01:45.112 test-pmd: explicitly disabled via build config 00:01:45.112 test-regex: explicitly disabled via build config 00:01:45.112 test-sad: explicitly disabled via build config 00:01:45.112 test-security-perf: explicitly disabled via build config 00:01:45.112 00:01:45.112 libs: 00:01:45.112 argparse: explicitly disabled via build config 00:01:45.112 metrics: explicitly disabled via build config 00:01:45.112 acl: explicitly disabled via build config 00:01:45.112 bbdev: explicitly disabled via build config 00:01:45.112 bitratestats: explicitly disabled via build config 00:01:45.112 bpf: explicitly disabled via build config 00:01:45.112 cfgfile: explicitly disabled via build config 00:01:45.112 distributor: explicitly disabled via build config 00:01:45.112 efd: explicitly disabled via build config 00:01:45.112 eventdev: explicitly disabled via build config 00:01:45.112 dispatcher: explicitly disabled via build config 00:01:45.112 gpudev: explicitly disabled via build config 00:01:45.112 gro: explicitly disabled via build config 00:01:45.112 gso: explicitly disabled via build config 00:01:45.112 ip_frag: explicitly disabled via build config 00:01:45.112 jobstats: explicitly disabled via build config 00:01:45.112 latencystats: explicitly disabled via build config 00:01:45.112 lpm: explicitly disabled via build config 00:01:45.112 member: explicitly disabled via build config 00:01:45.112 pcapng: explicitly disabled via build config 00:01:45.112 rawdev: explicitly disabled via build config 00:01:45.112 regexdev: explicitly disabled via build config 00:01:45.112 mldev: explicitly disabled via build config 00:01:45.112 rib: explicitly disabled via build config 00:01:45.112 sched: explicitly disabled via build config 00:01:45.112 stack: explicitly disabled via build config 00:01:45.112 ipsec: explicitly disabled via build config 00:01:45.112 pdcp: explicitly disabled via build config 00:01:45.112 fib: explicitly disabled via build config 00:01:45.112 port: explicitly disabled via build config 00:01:45.112 pdump: explicitly disabled via build config 00:01:45.112 table: explicitly disabled via build config 00:01:45.112 pipeline: explicitly disabled via build config 00:01:45.112 graph: explicitly disabled via build config 00:01:45.112 node: explicitly disabled via build config 00:01:45.112 00:01:45.112 drivers: 00:01:45.112 common/cpt: not in enabled drivers build config 00:01:45.112 common/dpaax: not in enabled drivers build config 00:01:45.112 common/iavf: not in enabled drivers build config 00:01:45.112 common/idpf: not in enabled drivers build config 00:01:45.112 common/ionic: not in enabled drivers build config 00:01:45.112 common/mvep: not in enabled drivers build config 00:01:45.112 common/octeontx: not in enabled drivers build config 00:01:45.112 bus/auxiliary: not in enabled drivers build config 00:01:45.112 bus/cdx: not in enabled drivers build config 00:01:45.112 bus/dpaa: not in enabled drivers build config 00:01:45.112 bus/fslmc: not in enabled drivers build config 00:01:45.112 bus/ifpga: not in enabled drivers build config 00:01:45.112 bus/platform: not in enabled drivers build config 00:01:45.112 bus/uacce: not in enabled drivers build config 00:01:45.112 bus/vmbus: not in enabled drivers build config 00:01:45.112 common/cnxk: not in enabled drivers build config 00:01:45.112 common/mlx5: not in enabled drivers build config 00:01:45.112 
common/nfp: not in enabled drivers build config 00:01:45.112 common/nitrox: not in enabled drivers build config 00:01:45.112 common/qat: not in enabled drivers build config 00:01:45.112 common/sfc_efx: not in enabled drivers build config 00:01:45.112 mempool/bucket: not in enabled drivers build config 00:01:45.112 mempool/cnxk: not in enabled drivers build config 00:01:45.112 mempool/dpaa: not in enabled drivers build config 00:01:45.112 mempool/dpaa2: not in enabled drivers build config 00:01:45.112 mempool/octeontx: not in enabled drivers build config 00:01:45.112 mempool/stack: not in enabled drivers build config 00:01:45.112 dma/cnxk: not in enabled drivers build config 00:01:45.112 dma/dpaa: not in enabled drivers build config 00:01:45.112 dma/dpaa2: not in enabled drivers build config 00:01:45.112 dma/hisilicon: not in enabled drivers build config 00:01:45.112 dma/idxd: not in enabled drivers build config 00:01:45.112 dma/ioat: not in enabled drivers build config 00:01:45.112 dma/skeleton: not in enabled drivers build config 00:01:45.112 net/af_packet: not in enabled drivers build config 00:01:45.112 net/af_xdp: not in enabled drivers build config 00:01:45.112 net/ark: not in enabled drivers build config 00:01:45.112 net/atlantic: not in enabled drivers build config 00:01:45.112 net/avp: not in enabled drivers build config 00:01:45.112 net/axgbe: not in enabled drivers build config 00:01:45.112 net/bnx2x: not in enabled drivers build config 00:01:45.112 net/bnxt: not in enabled drivers build config 00:01:45.112 net/bonding: not in enabled drivers build config 00:01:45.112 net/cnxk: not in enabled drivers build config 00:01:45.112 net/cpfl: not in enabled drivers build config 00:01:45.112 net/cxgbe: not in enabled drivers build config 00:01:45.112 net/dpaa: not in enabled drivers build config 00:01:45.112 net/dpaa2: not in enabled drivers build config 00:01:45.112 net/e1000: not in enabled drivers build config 00:01:45.112 net/ena: not in enabled drivers build config 00:01:45.112 net/enetc: not in enabled drivers build config 00:01:45.112 net/enetfec: not in enabled drivers build config 00:01:45.112 net/enic: not in enabled drivers build config 00:01:45.112 net/failsafe: not in enabled drivers build config 00:01:45.112 net/fm10k: not in enabled drivers build config 00:01:45.112 net/gve: not in enabled drivers build config 00:01:45.112 net/hinic: not in enabled drivers build config 00:01:45.112 net/hns3: not in enabled drivers build config 00:01:45.112 net/i40e: not in enabled drivers build config 00:01:45.112 net/iavf: not in enabled drivers build config 00:01:45.112 net/ice: not in enabled drivers build config 00:01:45.112 net/idpf: not in enabled drivers build config 00:01:45.112 net/igc: not in enabled drivers build config 00:01:45.112 net/ionic: not in enabled drivers build config 00:01:45.112 net/ipn3ke: not in enabled drivers build config 00:01:45.112 net/ixgbe: not in enabled drivers build config 00:01:45.112 net/mana: not in enabled drivers build config 00:01:45.112 net/memif: not in enabled drivers build config 00:01:45.112 net/mlx4: not in enabled drivers build config 00:01:45.112 net/mlx5: not in enabled drivers build config 00:01:45.112 net/mvneta: not in enabled drivers build config 00:01:45.112 net/mvpp2: not in enabled drivers build config 00:01:45.112 net/netvsc: not in enabled drivers build config 00:01:45.112 net/nfb: not in enabled drivers build config 00:01:45.112 net/nfp: not in enabled drivers build config 00:01:45.112 net/ngbe: not in enabled drivers build 
config 00:01:45.112 net/null: not in enabled drivers build config 00:01:45.112 net/octeontx: not in enabled drivers build config 00:01:45.112 net/octeon_ep: not in enabled drivers build config 00:01:45.112 net/pcap: not in enabled drivers build config 00:01:45.112 net/pfe: not in enabled drivers build config 00:01:45.112 net/qede: not in enabled drivers build config 00:01:45.112 net/ring: not in enabled drivers build config 00:01:45.112 net/sfc: not in enabled drivers build config 00:01:45.112 net/softnic: not in enabled drivers build config 00:01:45.112 net/tap: not in enabled drivers build config 00:01:45.112 net/thunderx: not in enabled drivers build config 00:01:45.112 net/txgbe: not in enabled drivers build config 00:01:45.112 net/vdev_netvsc: not in enabled drivers build config 00:01:45.112 net/vhost: not in enabled drivers build config 00:01:45.112 net/virtio: not in enabled drivers build config 00:01:45.112 net/vmxnet3: not in enabled drivers build config 00:01:45.112 raw/*: missing internal dependency, "rawdev" 00:01:45.112 crypto/armv8: not in enabled drivers build config 00:01:45.112 crypto/bcmfs: not in enabled drivers build config 00:01:45.112 crypto/caam_jr: not in enabled drivers build config 00:01:45.112 crypto/ccp: not in enabled drivers build config 00:01:45.112 crypto/cnxk: not in enabled drivers build config 00:01:45.112 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.112 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.112 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.112 crypto/mlx5: not in enabled drivers build config 00:01:45.112 crypto/mvsam: not in enabled drivers build config 00:01:45.112 crypto/nitrox: not in enabled drivers build config 00:01:45.112 crypto/null: not in enabled drivers build config 00:01:45.112 crypto/octeontx: not in enabled drivers build config 00:01:45.112 crypto/openssl: not in enabled drivers build config 00:01:45.112 crypto/scheduler: not in enabled drivers build config 00:01:45.112 crypto/uadk: not in enabled drivers build config 00:01:45.112 crypto/virtio: not in enabled drivers build config 00:01:45.112 compress/isal: not in enabled drivers build config 00:01:45.112 compress/mlx5: not in enabled drivers build config 00:01:45.112 compress/nitrox: not in enabled drivers build config 00:01:45.112 compress/octeontx: not in enabled drivers build config 00:01:45.112 compress/zlib: not in enabled drivers build config 00:01:45.112 regex/*: missing internal dependency, "regexdev" 00:01:45.112 ml/*: missing internal dependency, "mldev" 00:01:45.112 vdpa/ifc: not in enabled drivers build config 00:01:45.112 vdpa/mlx5: not in enabled drivers build config 00:01:45.112 vdpa/nfp: not in enabled drivers build config 00:01:45.112 vdpa/sfc: not in enabled drivers build config 00:01:45.112 event/*: missing internal dependency, "eventdev" 00:01:45.112 baseband/*: missing internal dependency, "bbdev" 00:01:45.112 gpu/*: missing internal dependency, "gpudev" 00:01:45.112 00:01:45.112 00:01:45.112 Build targets in project: 84 00:01:45.112 00:01:45.112 DPDK 24.03.0 00:01:45.112 00:01:45.112 User defined options 00:01:45.112 buildtype : debug 00:01:45.112 default_library : shared 00:01:45.112 libdir : lib 00:01:45.112 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:45.112 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:45.112 c_link_args : 00:01:45.112 cpu_instruction_set: native 00:01:45.112 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:45.113 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:45.113 enable_docs : false 00:01:45.113 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:45.113 enable_kmods : false 00:01:45.113 max_lcores : 128 00:01:45.113 tests : false 00:01:45.113 00:01:45.113 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.113 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:45.113 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:45.113 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:45.113 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:45.113 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:45.113 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:45.113 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:45.113 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.113 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:45.113 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:45.113 [10/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:45.113 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.113 [12/267] Linking static target lib/librte_kvargs.a 00:01:45.113 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:45.113 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:45.113 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.113 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:45.113 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:45.113 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:45.113 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:45.113 [20/267] Linking static target lib/librte_log.a 00:01:45.113 [21/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:45.113 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:45.113 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:45.113 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:45.113 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:45.113 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:45.113 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:45.113 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:45.113 [29/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:45.113 [30/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:45.113 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:45.113 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:45.113 [33/267] Linking static target lib/librte_pci.a 00:01:45.113 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:45.113 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:45.113 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:45.113 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:45.113 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:45.466 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:45.466 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.466 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:45.466 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:45.466 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:45.466 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:45.466 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:45.466 [46/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:45.466 [47/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.466 [48/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:45.466 [49/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:45.466 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:45.466 [51/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:45.466 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:45.466 [53/267] Linking static target lib/librte_timer.a 00:01:45.466 [54/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:45.466 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:45.466 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:45.466 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:45.466 [58/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:45.466 [59/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:45.466 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:45.466 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:45.466 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:45.466 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:45.466 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:45.466 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:45.466 [66/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:45.466 [67/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:45.466 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:45.466 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.466 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
00:01:45.466 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:45.466 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:45.466 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:45.466 [74/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:45.466 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:45.466 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:45.466 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:45.466 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:45.466 [79/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:45.466 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:45.466 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.466 [82/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:45.466 [83/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:45.466 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:45.466 [85/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:45.466 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:45.466 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:45.466 [88/267] Linking static target lib/librte_meter.a 00:01:45.466 [89/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:45.466 [90/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:45.466 [91/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:45.466 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:45.466 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:45.466 [94/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:45.466 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:45.466 [96/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.466 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:45.466 [98/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:45.466 [99/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:45.466 [100/267] Linking static target lib/librte_telemetry.a 00:01:45.466 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:45.466 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:45.466 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:45.466 [104/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:45.466 [105/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:45.466 [106/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:45.466 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:45.466 [108/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:45.466 [109/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:45.791 [110/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:45.791 [111/267] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:45.791 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:45.791 [113/267] Linking static target lib/librte_ring.a 00:01:45.791 [114/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:45.791 [115/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:45.791 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:45.791 [117/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:45.791 [118/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:45.791 [119/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:45.791 [120/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:45.791 [121/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:45.791 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:45.791 [123/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:45.791 [124/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:45.791 [125/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:45.791 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:45.791 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:45.791 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:45.791 [129/267] Linking static target lib/librte_cmdline.a 00:01:45.791 [130/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:45.791 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:45.791 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:45.791 [133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:45.791 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:45.791 [135/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:45.791 [136/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.791 [137/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:45.791 [138/267] Linking static target lib/librte_net.a 00:01:45.791 [139/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:45.791 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:45.791 [141/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:45.791 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:45.791 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:45.791 [144/267] Linking static target lib/librte_dmadev.a 00:01:45.791 [145/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:45.791 [146/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:45.791 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:45.791 [148/267] Linking static target lib/librte_compressdev.a 00:01:45.791 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:45.791 [150/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:45.791 [151/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:45.791 
[152/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:45.791 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:45.791 [154/267] Linking target lib/librte_log.so.24.1 00:01:45.791 [155/267] Linking static target lib/librte_mempool.a 00:01:45.791 [156/267] Linking static target lib/librte_security.a 00:01:45.791 [157/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:45.791 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:45.791 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:45.791 [160/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:45.791 [161/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:45.791 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:45.791 [163/267] Linking static target lib/librte_rcu.a 00:01:45.791 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:45.791 [165/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:45.791 [166/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:45.791 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:45.791 [168/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:45.791 [169/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:45.791 [170/267] Linking static target lib/librte_eal.a 00:01:45.791 [171/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.791 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:45.791 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:45.791 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:45.791 [175/267] Linking static target drivers/librte_bus_vdev.a 00:01:45.791 [176/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:45.792 [177/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.792 [178/267] Linking static target lib/librte_reorder.a 00:01:45.792 [179/267] Linking static target lib/librte_power.a 00:01:45.792 [180/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:45.792 [181/267] Linking static target lib/librte_mbuf.a 00:01:45.792 [182/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:45.792 [183/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:45.792 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.792 [185/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.792 [186/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:45.792 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.792 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:45.792 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:45.792 [190/267] Linking target lib/librte_kvargs.so.24.1 00:01:45.792 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:45.792 [192/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.062 [193/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.062 [194/267] 
Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:46.062 [195/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.062 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.062 [197/267] Linking static target lib/librte_hash.a 00:01:46.062 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.062 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.062 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.062 [201/267] Linking static target drivers/librte_bus_pci.a 00:01:46.062 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:46.062 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.062 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.062 [205/267] Linking static target drivers/librte_mempool_ring.a 00:01:46.062 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.062 [207/267] Linking static target lib/librte_cryptodev.a 00:01:46.062 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:46.062 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.062 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.062 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.062 [212/267] Linking target lib/librte_telemetry.so.24.1 00:01:46.322 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.322 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:46.322 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.322 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.322 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.584 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:46.584 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:46.584 [220/267] Linking static target lib/librte_ethdev.a 00:01:46.584 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.584 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.845 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.845 [224/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.106 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.106 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.367 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:47.367 [228/267] Linking static target lib/librte_vhost.a 00:01:48.319 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:49.719 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.309 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.252 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.252 [233/267] Linking target lib/librte_eal.so.24.1 00:01:57.513 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:57.513 [235/267] Linking target lib/librte_ring.so.24.1 00:01:57.513 [236/267] Linking target lib/librte_timer.so.24.1 00:01:57.513 [237/267] Linking target lib/librte_meter.so.24.1 00:01:57.513 [238/267] Linking target lib/librte_pci.so.24.1 00:01:57.513 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:57.513 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:57.513 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:57.513 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:57.773 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:57.773 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:57.773 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:57.773 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:57.773 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:57.773 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:57.773 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:57.773 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:57.774 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:57.774 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:58.034 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.034 [254/267] Linking target lib/librte_reorder.so.24.1 00:01:58.034 [255/267] Linking target lib/librte_net.so.24.1 00:01:58.034 [256/267] Linking target lib/librte_compressdev.so.24.1 00:01:58.034 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:58.295 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:58.295 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:58.295 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:58.295 [261/267] Linking target lib/librte_hash.so.24.1 00:01:58.295 [262/267] Linking target lib/librte_security.so.24.1 00:01:58.295 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:58.295 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:58.295 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:58.556 [266/267] Linking target lib/librte_power.so.24.1 00:01:58.556 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:58.556 INFO: autodetecting backend as ninja 00:01:58.556 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:02.759 CC lib/log/log.o 00:02:02.759 CC lib/log/log_flags.o 00:02:02.759 CC lib/log/log_deprecated.o 00:02:02.759 CC lib/ut_mock/mock.o 00:02:02.759 CC lib/ut/ut.o 00:02:02.759 LIB libspdk_ut_mock.a 00:02:02.759 LIB libspdk_ut.a 00:02:02.759 LIB libspdk_log.a 00:02:02.759 SO 
libspdk_ut_mock.so.6.0 00:02:02.759 SO libspdk_ut.so.2.0 00:02:02.759 SO libspdk_log.so.7.0 00:02:02.759 SYMLINK libspdk_ut_mock.so 00:02:02.759 SYMLINK libspdk_ut.so 00:02:02.759 SYMLINK libspdk_log.so 00:02:03.020 CC lib/dma/dma.o 00:02:03.020 CC lib/util/base64.o 00:02:03.020 CC lib/util/bit_array.o 00:02:03.020 CC lib/util/cpuset.o 00:02:03.020 CC lib/util/crc16.o 00:02:03.020 CC lib/ioat/ioat.o 00:02:03.020 CC lib/util/crc32.o 00:02:03.020 CC lib/util/crc32c.o 00:02:03.020 CXX lib/trace_parser/trace.o 00:02:03.020 CC lib/util/crc32_ieee.o 00:02:03.020 CC lib/util/crc64.o 00:02:03.020 CC lib/util/dif.o 00:02:03.020 CC lib/util/fd.o 00:02:03.020 CC lib/util/fd_group.o 00:02:03.020 CC lib/util/file.o 00:02:03.020 CC lib/util/hexlify.o 00:02:03.020 CC lib/util/iov.o 00:02:03.020 CC lib/util/math.o 00:02:03.020 CC lib/util/net.o 00:02:03.020 CC lib/util/pipe.o 00:02:03.020 CC lib/util/strerror_tls.o 00:02:03.020 CC lib/util/string.o 00:02:03.020 CC lib/util/uuid.o 00:02:03.020 CC lib/util/xor.o 00:02:03.020 CC lib/util/zipf.o 00:02:03.020 CC lib/util/md5.o 00:02:03.280 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.280 CC lib/vfio_user/host/vfio_user.o 00:02:03.280 LIB libspdk_dma.a 00:02:03.280 SO libspdk_dma.so.5.0 00:02:03.280 LIB libspdk_ioat.a 00:02:03.280 SYMLINK libspdk_dma.so 00:02:03.540 SO libspdk_ioat.so.7.0 00:02:03.540 SYMLINK libspdk_ioat.so 00:02:03.540 LIB libspdk_util.a 00:02:03.540 LIB libspdk_vfio_user.a 00:02:03.540 SO libspdk_vfio_user.so.5.0 00:02:03.540 SO libspdk_util.so.10.0 00:02:03.540 SYMLINK libspdk_vfio_user.so 00:02:03.540 SYMLINK libspdk_util.so 00:02:03.802 LIB libspdk_trace_parser.a 00:02:04.087 SO libspdk_trace_parser.so.6.0 00:02:04.087 CC lib/json/json_parse.o 00:02:04.087 CC lib/rdma_utils/rdma_utils.o 00:02:04.087 CC lib/conf/conf.o 00:02:04.087 CC lib/json/json_util.o 00:02:04.087 CC lib/vmd/vmd.o 00:02:04.087 CC lib/json/json_write.o 00:02:04.087 CC lib/idxd/idxd.o 00:02:04.087 CC lib/vmd/led.o 00:02:04.087 CC lib/idxd/idxd_user.o 00:02:04.087 CC lib/idxd/idxd_kernel.o 00:02:04.087 CC lib/rdma_provider/common.o 00:02:04.087 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:04.087 CC lib/env_dpdk/env.o 00:02:04.087 CC lib/env_dpdk/memory.o 00:02:04.087 CC lib/env_dpdk/pci.o 00:02:04.087 CC lib/env_dpdk/init.o 00:02:04.087 CC lib/env_dpdk/threads.o 00:02:04.087 CC lib/env_dpdk/pci_ioat.o 00:02:04.087 CC lib/env_dpdk/pci_virtio.o 00:02:04.087 CC lib/env_dpdk/pci_vmd.o 00:02:04.087 CC lib/env_dpdk/pci_idxd.o 00:02:04.087 CC lib/env_dpdk/pci_event.o 00:02:04.087 CC lib/env_dpdk/sigbus_handler.o 00:02:04.087 CC lib/env_dpdk/pci_dpdk.o 00:02:04.087 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.087 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.087 SYMLINK libspdk_trace_parser.so 00:02:04.347 LIB libspdk_rdma_provider.a 00:02:04.347 LIB libspdk_conf.a 00:02:04.347 SO libspdk_rdma_provider.so.6.0 00:02:04.347 SO libspdk_conf.so.6.0 00:02:04.347 LIB libspdk_rdma_utils.a 00:02:04.347 LIB libspdk_json.a 00:02:04.347 SYMLINK libspdk_rdma_provider.so 00:02:04.347 SO libspdk_rdma_utils.so.1.0 00:02:04.347 SYMLINK libspdk_conf.so 00:02:04.347 SO libspdk_json.so.6.0 00:02:04.347 SYMLINK libspdk_rdma_utils.so 00:02:04.347 SYMLINK libspdk_json.so 00:02:04.608 LIB libspdk_idxd.a 00:02:04.608 SO libspdk_idxd.so.12.1 00:02:04.608 LIB libspdk_vmd.a 00:02:04.608 SO libspdk_vmd.so.6.0 00:02:04.608 SYMLINK libspdk_idxd.so 00:02:04.870 SYMLINK libspdk_vmd.so 00:02:04.870 CC lib/jsonrpc/jsonrpc_server.o 00:02:04.870 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:04.870 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:04.870 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.132 LIB libspdk_jsonrpc.a 00:02:05.132 SO libspdk_jsonrpc.so.6.0 00:02:05.132 SYMLINK libspdk_jsonrpc.so 00:02:05.396 LIB libspdk_env_dpdk.a 00:02:05.396 SO libspdk_env_dpdk.so.15.0 00:02:05.396 SYMLINK libspdk_env_dpdk.so 00:02:05.656 CC lib/rpc/rpc.o 00:02:05.656 LIB libspdk_rpc.a 00:02:05.916 SO libspdk_rpc.so.6.0 00:02:05.916 SYMLINK libspdk_rpc.so 00:02:06.177 CC lib/trace/trace.o 00:02:06.177 CC lib/trace/trace_flags.o 00:02:06.177 CC lib/trace/trace_rpc.o 00:02:06.177 CC lib/notify/notify.o 00:02:06.177 CC lib/keyring/keyring.o 00:02:06.177 CC lib/notify/notify_rpc.o 00:02:06.177 CC lib/keyring/keyring_rpc.o 00:02:06.438 LIB libspdk_notify.a 00:02:06.438 SO libspdk_notify.so.6.0 00:02:06.438 LIB libspdk_keyring.a 00:02:06.438 LIB libspdk_trace.a 00:02:06.438 SO libspdk_trace.so.11.0 00:02:06.438 SO libspdk_keyring.so.2.0 00:02:06.438 SYMLINK libspdk_notify.so 00:02:06.699 SYMLINK libspdk_trace.so 00:02:06.699 SYMLINK libspdk_keyring.so 00:02:06.959 CC lib/thread/thread.o 00:02:06.959 CC lib/thread/iobuf.o 00:02:06.959 CC lib/sock/sock.o 00:02:06.959 CC lib/sock/sock_rpc.o 00:02:07.218 LIB libspdk_sock.a 00:02:07.477 SO libspdk_sock.so.10.0 00:02:07.477 SYMLINK libspdk_sock.so 00:02:07.740 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.740 CC lib/nvme/nvme_ctrlr.o 00:02:07.740 CC lib/nvme/nvme_fabric.o 00:02:07.740 CC lib/nvme/nvme_ns_cmd.o 00:02:07.740 CC lib/nvme/nvme_ns.o 00:02:07.740 CC lib/nvme/nvme_pcie_common.o 00:02:07.740 CC lib/nvme/nvme_pcie.o 00:02:07.740 CC lib/nvme/nvme_qpair.o 00:02:07.740 CC lib/nvme/nvme.o 00:02:07.740 CC lib/nvme/nvme_quirks.o 00:02:07.740 CC lib/nvme/nvme_transport.o 00:02:07.740 CC lib/nvme/nvme_discovery.o 00:02:07.740 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.740 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.740 CC lib/nvme/nvme_tcp.o 00:02:07.740 CC lib/nvme/nvme_opal.o 00:02:07.740 CC lib/nvme/nvme_io_msg.o 00:02:07.740 CC lib/nvme/nvme_poll_group.o 00:02:07.740 CC lib/nvme/nvme_zns.o 00:02:07.740 CC lib/nvme/nvme_stubs.o 00:02:07.740 CC lib/nvme/nvme_auth.o 00:02:07.740 CC lib/nvme/nvme_cuse.o 00:02:07.740 CC lib/nvme/nvme_vfio_user.o 00:02:07.740 CC lib/nvme/nvme_rdma.o 00:02:08.311 LIB libspdk_thread.a 00:02:08.311 SO libspdk_thread.so.10.2 00:02:08.311 SYMLINK libspdk_thread.so 00:02:08.884 CC lib/blob/blobstore.o 00:02:08.884 CC lib/blob/request.o 00:02:08.884 CC lib/blob/zeroes.o 00:02:08.884 CC lib/blob/blob_bs_dev.o 00:02:08.884 CC lib/accel/accel.o 00:02:08.884 CC lib/accel/accel_rpc.o 00:02:08.884 CC lib/accel/accel_sw.o 00:02:08.884 CC lib/init/json_config.o 00:02:08.884 CC lib/init/subsystem.o 00:02:08.884 CC lib/init/subsystem_rpc.o 00:02:08.884 CC lib/init/rpc.o 00:02:08.884 CC lib/fsdev/fsdev.o 00:02:08.884 CC lib/vfu_tgt/tgt_endpoint.o 00:02:08.884 CC lib/fsdev/fsdev_io.o 00:02:08.884 CC lib/vfu_tgt/tgt_rpc.o 00:02:08.884 CC lib/virtio/virtio.o 00:02:08.884 CC lib/fsdev/fsdev_rpc.o 00:02:08.884 CC lib/virtio/virtio_vhost_user.o 00:02:08.884 CC lib/virtio/virtio_vfio_user.o 00:02:08.884 CC lib/virtio/virtio_pci.o 00:02:09.146 LIB libspdk_init.a 00:02:09.146 SO libspdk_init.so.6.0 00:02:09.146 LIB libspdk_vfu_tgt.a 00:02:09.146 LIB libspdk_virtio.a 00:02:09.146 SO libspdk_vfu_tgt.so.3.0 00:02:09.146 SO libspdk_virtio.so.7.0 00:02:09.146 SYMLINK libspdk_init.so 00:02:09.146 SYMLINK libspdk_vfu_tgt.so 00:02:09.146 SYMLINK libspdk_virtio.so 00:02:09.407 LIB libspdk_fsdev.a 00:02:09.407 SO libspdk_fsdev.so.1.0 00:02:09.407 CC lib/event/app.o 00:02:09.407 CC 
lib/event/reactor.o 00:02:09.407 CC lib/event/log_rpc.o 00:02:09.407 CC lib/event/app_rpc.o 00:02:09.407 CC lib/event/scheduler_static.o 00:02:09.669 SYMLINK libspdk_fsdev.so 00:02:09.669 LIB libspdk_accel.a 00:02:09.669 SO libspdk_accel.so.16.0 00:02:09.931 LIB libspdk_nvme.a 00:02:09.931 SYMLINK libspdk_accel.so 00:02:09.931 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:09.931 LIB libspdk_event.a 00:02:09.931 SO libspdk_nvme.so.14.0 00:02:09.931 SO libspdk_event.so.15.0 00:02:10.192 SYMLINK libspdk_event.so 00:02:10.192 SYMLINK libspdk_nvme.so 00:02:10.192 CC lib/bdev/bdev.o 00:02:10.192 CC lib/bdev/bdev_rpc.o 00:02:10.192 CC lib/bdev/bdev_zone.o 00:02:10.192 CC lib/bdev/part.o 00:02:10.192 CC lib/bdev/scsi_nvme.o 00:02:10.454 LIB libspdk_fuse_dispatcher.a 00:02:10.454 SO libspdk_fuse_dispatcher.so.1.0 00:02:10.715 SYMLINK libspdk_fuse_dispatcher.so 00:02:11.658 LIB libspdk_blob.a 00:02:11.658 SO libspdk_blob.so.11.0 00:02:11.658 SYMLINK libspdk_blob.so 00:02:11.919 CC lib/blobfs/blobfs.o 00:02:11.920 CC lib/blobfs/tree.o 00:02:11.920 CC lib/lvol/lvol.o 00:02:12.492 LIB libspdk_bdev.a 00:02:12.753 SO libspdk_bdev.so.17.0 00:02:12.753 SYMLINK libspdk_bdev.so 00:02:12.753 LIB libspdk_blobfs.a 00:02:12.753 SO libspdk_blobfs.so.10.0 00:02:12.753 LIB libspdk_lvol.a 00:02:12.753 SYMLINK libspdk_blobfs.so 00:02:12.753 SO libspdk_lvol.so.10.0 00:02:13.015 SYMLINK libspdk_lvol.so 00:02:13.015 CC lib/nbd/nbd.o 00:02:13.015 CC lib/nbd/nbd_rpc.o 00:02:13.015 CC lib/nvmf/ctrlr.o 00:02:13.015 CC lib/scsi/dev.o 00:02:13.015 CC lib/ublk/ublk.o 00:02:13.015 CC lib/nvmf/ctrlr_discovery.o 00:02:13.015 CC lib/scsi/lun.o 00:02:13.015 CC lib/nvmf/ctrlr_bdev.o 00:02:13.015 CC lib/ublk/ublk_rpc.o 00:02:13.015 CC lib/scsi/port.o 00:02:13.015 CC lib/nvmf/subsystem.o 00:02:13.015 CC lib/ftl/ftl_core.o 00:02:13.015 CC lib/scsi/scsi.o 00:02:13.015 CC lib/scsi/scsi_bdev.o 00:02:13.015 CC lib/nvmf/nvmf.o 00:02:13.015 CC lib/ftl/ftl_init.o 00:02:13.015 CC lib/nvmf/nvmf_rpc.o 00:02:13.015 CC lib/scsi/scsi_pr.o 00:02:13.015 CC lib/ftl/ftl_layout.o 00:02:13.015 CC lib/scsi/scsi_rpc.o 00:02:13.015 CC lib/nvmf/transport.o 00:02:13.015 CC lib/ftl/ftl_debug.o 00:02:13.015 CC lib/scsi/task.o 00:02:13.015 CC lib/nvmf/tcp.o 00:02:13.015 CC lib/ftl/ftl_io.o 00:02:13.015 CC lib/nvmf/stubs.o 00:02:13.015 CC lib/ftl/ftl_sb.o 00:02:13.015 CC lib/ftl/ftl_l2p.o 00:02:13.015 CC lib/nvmf/mdns_server.o 00:02:13.015 CC lib/nvmf/vfio_user.o 00:02:13.015 CC lib/ftl/ftl_l2p_flat.o 00:02:13.015 CC lib/nvmf/rdma.o 00:02:13.015 CC lib/ftl/ftl_nv_cache.o 00:02:13.015 CC lib/nvmf/auth.o 00:02:13.015 CC lib/ftl/ftl_band.o 00:02:13.015 CC lib/ftl/ftl_band_ops.o 00:02:13.015 CC lib/ftl/ftl_writer.o 00:02:13.015 CC lib/ftl/ftl_rq.o 00:02:13.015 CC lib/ftl/ftl_reloc.o 00:02:13.015 CC lib/ftl/ftl_l2p_cache.o 00:02:13.015 CC lib/ftl/ftl_p2l.o 00:02:13.015 CC lib/ftl/ftl_p2l_log.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:13.015 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.015 CC lib/ftl/utils/ftl_conf.o 00:02:13.015 CC lib/ftl/utils/ftl_md.o 
00:02:13.276 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.276 CC lib/ftl/utils/ftl_mempool.o 00:02:13.276 CC lib/ftl/utils/ftl_property.o 00:02:13.276 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.276 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.276 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.276 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.276 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.276 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:13.276 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:13.276 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.276 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.276 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.276 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:13.276 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:13.276 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:13.276 CC lib/ftl/base/ftl_base_dev.o 00:02:13.276 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.276 CC lib/ftl/ftl_trace.o 00:02:13.857 LIB libspdk_nbd.a 00:02:13.857 SO libspdk_nbd.so.7.0 00:02:13.857 SYMLINK libspdk_nbd.so 00:02:13.857 LIB libspdk_scsi.a 00:02:13.857 SO libspdk_scsi.so.9.0 00:02:13.857 LIB libspdk_ublk.a 00:02:14.116 SO libspdk_ublk.so.3.0 00:02:14.116 SYMLINK libspdk_scsi.so 00:02:14.116 SYMLINK libspdk_ublk.so 00:02:14.376 LIB libspdk_ftl.a 00:02:14.376 CC lib/vhost/vhost.o 00:02:14.376 CC lib/vhost/vhost_rpc.o 00:02:14.376 CC lib/vhost/vhost_scsi.o 00:02:14.376 CC lib/vhost/vhost_blk.o 00:02:14.376 CC lib/iscsi/init_grp.o 00:02:14.376 CC lib/vhost/rte_vhost_user.o 00:02:14.376 CC lib/iscsi/conn.o 00:02:14.376 CC lib/iscsi/iscsi.o 00:02:14.376 CC lib/iscsi/param.o 00:02:14.376 CC lib/iscsi/portal_grp.o 00:02:14.376 CC lib/iscsi/tgt_node.o 00:02:14.376 CC lib/iscsi/iscsi_subsystem.o 00:02:14.376 CC lib/iscsi/iscsi_rpc.o 00:02:14.376 CC lib/iscsi/task.o 00:02:14.636 SO libspdk_ftl.so.9.0 00:02:14.896 SYMLINK libspdk_ftl.so 00:02:15.156 LIB libspdk_nvmf.a 00:02:15.432 SO libspdk_nvmf.so.19.0 00:02:15.432 LIB libspdk_vhost.a 00:02:15.432 SO libspdk_vhost.so.8.0 00:02:15.432 SYMLINK libspdk_nvmf.so 00:02:15.432 SYMLINK libspdk_vhost.so 00:02:15.710 LIB libspdk_iscsi.a 00:02:15.710 SO libspdk_iscsi.so.8.0 00:02:15.710 SYMLINK libspdk_iscsi.so 00:02:16.284 CC module/env_dpdk/env_dpdk_rpc.o 00:02:16.284 CC module/vfu_device/vfu_virtio.o 00:02:16.284 CC module/vfu_device/vfu_virtio_blk.o 00:02:16.284 CC module/vfu_device/vfu_virtio_rpc.o 00:02:16.284 CC module/vfu_device/vfu_virtio_scsi.o 00:02:16.284 CC module/vfu_device/vfu_virtio_fs.o 00:02:16.544 CC module/sock/posix/posix.o 00:02:16.544 CC module/keyring/file/keyring.o 00:02:16.544 CC module/keyring/file/keyring_rpc.o 00:02:16.544 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:16.544 CC module/keyring/linux/keyring.o 00:02:16.544 CC module/keyring/linux/keyring_rpc.o 00:02:16.544 LIB libspdk_env_dpdk_rpc.a 00:02:16.544 CC module/scheduler/gscheduler/gscheduler.o 00:02:16.544 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:16.544 CC module/accel/ioat/accel_ioat.o 00:02:16.544 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.544 CC module/accel/error/accel_error_rpc.o 00:02:16.544 CC module/accel/error/accel_error.o 00:02:16.544 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:16.544 CC module/fsdev/aio/fsdev_aio.o 00:02:16.544 CC module/fsdev/aio/linux_aio_mgr.o 00:02:16.544 CC module/accel/iaa/accel_iaa.o 00:02:16.544 CC module/accel/iaa/accel_iaa_rpc.o 00:02:16.544 CC module/accel/dsa/accel_dsa.o 00:02:16.544 CC module/accel/dsa/accel_dsa_rpc.o 00:02:16.544 CC module/blob/bdev/blob_bdev.o 00:02:16.544 SO libspdk_env_dpdk_rpc.so.6.0 00:02:16.806 SYMLINK 
libspdk_env_dpdk_rpc.so 00:02:16.806 LIB libspdk_keyring_linux.a 00:02:16.806 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.806 LIB libspdk_keyring_file.a 00:02:16.806 LIB libspdk_scheduler_gscheduler.a 00:02:16.806 SO libspdk_keyring_linux.so.1.0 00:02:16.806 LIB libspdk_accel_error.a 00:02:16.806 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:16.806 LIB libspdk_accel_ioat.a 00:02:16.806 SO libspdk_keyring_file.so.2.0 00:02:16.806 LIB libspdk_scheduler_dynamic.a 00:02:16.806 SO libspdk_scheduler_gscheduler.so.4.0 00:02:16.806 SO libspdk_accel_error.so.2.0 00:02:16.806 LIB libspdk_accel_iaa.a 00:02:16.806 SO libspdk_accel_ioat.so.6.0 00:02:16.806 SYMLINK libspdk_keyring_linux.so 00:02:16.806 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:16.806 SO libspdk_scheduler_dynamic.so.4.0 00:02:16.806 SYMLINK libspdk_keyring_file.so 00:02:16.806 SO libspdk_accel_iaa.so.3.0 00:02:16.806 LIB libspdk_blob_bdev.a 00:02:16.806 SYMLINK libspdk_scheduler_gscheduler.so 00:02:17.068 LIB libspdk_accel_dsa.a 00:02:17.068 SYMLINK libspdk_accel_error.so 00:02:17.068 SYMLINK libspdk_scheduler_dynamic.so 00:02:17.068 SO libspdk_blob_bdev.so.11.0 00:02:17.068 SYMLINK libspdk_accel_ioat.so 00:02:17.068 SO libspdk_accel_dsa.so.5.0 00:02:17.068 SYMLINK libspdk_accel_iaa.so 00:02:17.068 SYMLINK libspdk_blob_bdev.so 00:02:17.068 LIB libspdk_vfu_device.a 00:02:17.068 SYMLINK libspdk_accel_dsa.so 00:02:17.068 SO libspdk_vfu_device.so.3.0 00:02:17.068 SYMLINK libspdk_vfu_device.so 00:02:17.328 LIB libspdk_fsdev_aio.a 00:02:17.328 SO libspdk_fsdev_aio.so.1.0 00:02:17.328 LIB libspdk_sock_posix.a 00:02:17.328 SO libspdk_sock_posix.so.6.0 00:02:17.328 SYMLINK libspdk_fsdev_aio.so 00:02:17.589 SYMLINK libspdk_sock_posix.so 00:02:17.589 CC module/blobfs/bdev/blobfs_bdev.o 00:02:17.589 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:17.589 CC module/bdev/null/bdev_null.o 00:02:17.589 CC module/bdev/error/vbdev_error.o 00:02:17.589 CC module/bdev/gpt/gpt.o 00:02:17.589 CC module/bdev/error/vbdev_error_rpc.o 00:02:17.589 CC module/bdev/null/bdev_null_rpc.o 00:02:17.589 CC module/bdev/gpt/vbdev_gpt.o 00:02:17.589 CC module/bdev/delay/vbdev_delay.o 00:02:17.589 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:17.589 CC module/bdev/malloc/bdev_malloc.o 00:02:17.589 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:17.589 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:17.589 CC module/bdev/nvme/bdev_nvme.o 00:02:17.589 CC module/bdev/split/vbdev_split.o 00:02:17.589 CC module/bdev/split/vbdev_split_rpc.o 00:02:17.589 CC module/bdev/nvme/nvme_rpc.o 00:02:17.589 CC module/bdev/lvol/vbdev_lvol.o 00:02:17.589 CC module/bdev/raid/bdev_raid.o 00:02:17.589 CC module/bdev/nvme/bdev_mdns_client.o 00:02:17.589 CC module/bdev/raid/bdev_raid_rpc.o 00:02:17.589 CC module/bdev/passthru/vbdev_passthru.o 00:02:17.589 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:17.589 CC module/bdev/nvme/vbdev_opal.o 00:02:17.589 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:17.589 CC module/bdev/raid/bdev_raid_sb.o 00:02:17.589 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:17.589 CC module/bdev/raid/raid0.o 00:02:17.589 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:17.589 CC module/bdev/raid/raid1.o 00:02:17.589 CC module/bdev/raid/concat.o 00:02:17.589 CC module/bdev/aio/bdev_aio.o 00:02:17.589 CC module/bdev/ftl/bdev_ftl.o 00:02:17.589 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:17.589 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:17.589 CC module/bdev/aio/bdev_aio_rpc.o 00:02:17.589 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:17.589 CC module/bdev/virtio/bdev_virtio_rpc.o 
00:02:17.589 CC module/bdev/iscsi/bdev_iscsi.o 00:02:17.589 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:17.589 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:17.589 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:17.850 LIB libspdk_blobfs_bdev.a 00:02:17.850 SO libspdk_blobfs_bdev.so.6.0 00:02:17.850 LIB libspdk_bdev_error.a 00:02:17.850 LIB libspdk_bdev_split.a 00:02:17.850 LIB libspdk_bdev_null.a 00:02:17.850 SO libspdk_bdev_split.so.6.0 00:02:17.850 LIB libspdk_bdev_ftl.a 00:02:17.850 SO libspdk_bdev_error.so.6.0 00:02:17.850 SYMLINK libspdk_blobfs_bdev.so 00:02:17.850 LIB libspdk_bdev_gpt.a 00:02:17.850 LIB libspdk_bdev_passthru.a 00:02:18.109 SO libspdk_bdev_null.so.6.0 00:02:18.109 SO libspdk_bdev_ftl.so.6.0 00:02:18.109 SO libspdk_bdev_passthru.so.6.0 00:02:18.109 LIB libspdk_bdev_delay.a 00:02:18.109 SO libspdk_bdev_gpt.so.6.0 00:02:18.109 LIB libspdk_bdev_malloc.a 00:02:18.109 SYMLINK libspdk_bdev_error.so 00:02:18.109 SYMLINK libspdk_bdev_split.so 00:02:18.109 SYMLINK libspdk_bdev_null.so 00:02:18.109 SYMLINK libspdk_bdev_ftl.so 00:02:18.109 LIB libspdk_bdev_aio.a 00:02:18.109 LIB libspdk_bdev_zone_block.a 00:02:18.109 SO libspdk_bdev_delay.so.6.0 00:02:18.109 SYMLINK libspdk_bdev_passthru.so 00:02:18.109 LIB libspdk_bdev_iscsi.a 00:02:18.109 SO libspdk_bdev_malloc.so.6.0 00:02:18.109 SYMLINK libspdk_bdev_gpt.so 00:02:18.109 SO libspdk_bdev_aio.so.6.0 00:02:18.109 SO libspdk_bdev_zone_block.so.6.0 00:02:18.109 SO libspdk_bdev_iscsi.so.6.0 00:02:18.109 SYMLINK libspdk_bdev_delay.so 00:02:18.109 SYMLINK libspdk_bdev_malloc.so 00:02:18.109 SYMLINK libspdk_bdev_zone_block.so 00:02:18.109 SYMLINK libspdk_bdev_aio.so 00:02:18.109 LIB libspdk_bdev_virtio.a 00:02:18.109 SYMLINK libspdk_bdev_iscsi.so 00:02:18.109 LIB libspdk_bdev_lvol.a 00:02:18.109 SO libspdk_bdev_virtio.so.6.0 00:02:18.369 SO libspdk_bdev_lvol.so.6.0 00:02:18.369 SYMLINK libspdk_bdev_virtio.so 00:02:18.369 SYMLINK libspdk_bdev_lvol.so 00:02:18.628 LIB libspdk_bdev_raid.a 00:02:18.628 SO libspdk_bdev_raid.so.6.0 00:02:18.628 SYMLINK libspdk_bdev_raid.so 00:02:20.012 LIB libspdk_bdev_nvme.a 00:02:20.012 SO libspdk_bdev_nvme.so.7.0 00:02:20.012 SYMLINK libspdk_bdev_nvme.so 00:02:20.584 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.584 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.584 CC module/event/subsystems/vmd/vmd.o 00:02:20.584 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.584 CC module/event/subsystems/sock/sock.o 00:02:20.584 CC module/event/subsystems/keyring/keyring.o 00:02:20.584 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.584 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:20.584 CC module/event/subsystems/fsdev/fsdev.o 00:02:20.584 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.845 LIB libspdk_event_sock.a 00:02:20.845 LIB libspdk_event_fsdev.a 00:02:20.845 LIB libspdk_event_keyring.a 00:02:20.845 LIB libspdk_event_iobuf.a 00:02:20.845 LIB libspdk_event_vmd.a 00:02:20.845 LIB libspdk_event_vfu_tgt.a 00:02:20.845 LIB libspdk_event_vhost_blk.a 00:02:20.845 LIB libspdk_event_scheduler.a 00:02:20.845 SO libspdk_event_sock.so.5.0 00:02:20.845 SO libspdk_event_keyring.so.1.0 00:02:20.845 SO libspdk_event_fsdev.so.1.0 00:02:20.845 SO libspdk_event_vfu_tgt.so.3.0 00:02:20.845 SO libspdk_event_iobuf.so.3.0 00:02:20.845 SO libspdk_event_scheduler.so.4.0 00:02:20.845 SO libspdk_event_vmd.so.6.0 00:02:20.845 SO libspdk_event_vhost_blk.so.3.0 00:02:20.845 SYMLINK libspdk_event_sock.so 00:02:20.845 SYMLINK libspdk_event_keyring.so 00:02:20.845 SYMLINK 
libspdk_event_fsdev.so 00:02:20.845 SYMLINK libspdk_event_vfu_tgt.so 00:02:20.845 SYMLINK libspdk_event_scheduler.so 00:02:20.845 SYMLINK libspdk_event_iobuf.so 00:02:20.845 SYMLINK libspdk_event_vhost_blk.so 00:02:20.845 SYMLINK libspdk_event_vmd.so 00:02:21.417 CC module/event/subsystems/accel/accel.o 00:02:21.417 LIB libspdk_event_accel.a 00:02:21.417 SO libspdk_event_accel.so.6.0 00:02:21.417 SYMLINK libspdk_event_accel.so 00:02:21.988 CC module/event/subsystems/bdev/bdev.o 00:02:21.988 LIB libspdk_event_bdev.a 00:02:21.989 SO libspdk_event_bdev.so.6.0 00:02:22.249 SYMLINK libspdk_event_bdev.so 00:02:22.510 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:22.510 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.510 CC module/event/subsystems/scsi/scsi.o 00:02:22.510 CC module/event/subsystems/nbd/nbd.o 00:02:22.510 CC module/event/subsystems/ublk/ublk.o 00:02:22.771 LIB libspdk_event_ublk.a 00:02:22.771 LIB libspdk_event_nbd.a 00:02:22.771 LIB libspdk_event_scsi.a 00:02:22.772 SO libspdk_event_ublk.so.3.0 00:02:22.772 SO libspdk_event_nbd.so.6.0 00:02:22.772 SO libspdk_event_scsi.so.6.0 00:02:22.772 LIB libspdk_event_nvmf.a 00:02:22.772 SYMLINK libspdk_event_ublk.so 00:02:22.772 SYMLINK libspdk_event_nbd.so 00:02:22.772 SO libspdk_event_nvmf.so.6.0 00:02:22.772 SYMLINK libspdk_event_scsi.so 00:02:22.772 SYMLINK libspdk_event_nvmf.so 00:02:23.343 CC module/event/subsystems/iscsi/iscsi.o 00:02:23.343 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.343 LIB libspdk_event_vhost_scsi.a 00:02:23.343 LIB libspdk_event_iscsi.a 00:02:23.343 SO libspdk_event_vhost_scsi.so.3.0 00:02:23.343 SO libspdk_event_iscsi.so.6.0 00:02:23.343 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.604 SYMLINK libspdk_event_iscsi.so 00:02:23.604 SO libspdk.so.6.0 00:02:23.604 SYMLINK libspdk.so 00:02:24.175 CXX app/trace/trace.o 00:02:24.175 CC app/trace_record/trace_record.o 00:02:24.175 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.175 CC app/spdk_top/spdk_top.o 00:02:24.175 CC app/spdk_nvme_perf/perf.o 00:02:24.175 CC app/spdk_lspci/spdk_lspci.o 00:02:24.175 CC app/spdk_nvme_identify/identify.o 00:02:24.175 CC test/rpc_client/rpc_client_test.o 00:02:24.175 TEST_HEADER include/spdk/accel.h 00:02:24.175 TEST_HEADER include/spdk/accel_module.h 00:02:24.175 TEST_HEADER include/spdk/barrier.h 00:02:24.175 TEST_HEADER include/spdk/assert.h 00:02:24.175 TEST_HEADER include/spdk/base64.h 00:02:24.175 TEST_HEADER include/spdk/bdev_module.h 00:02:24.175 TEST_HEADER include/spdk/bdev.h 00:02:24.175 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.175 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.175 TEST_HEADER include/spdk/bit_array.h 00:02:24.175 TEST_HEADER include/spdk/bit_pool.h 00:02:24.175 TEST_HEADER include/spdk/blobfs.h 00:02:24.175 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.175 TEST_HEADER include/spdk/blob.h 00:02:24.175 TEST_HEADER include/spdk/conf.h 00:02:24.175 TEST_HEADER include/spdk/config.h 00:02:24.175 TEST_HEADER include/spdk/cpuset.h 00:02:24.175 TEST_HEADER include/spdk/crc16.h 00:02:24.175 TEST_HEADER include/spdk/crc64.h 00:02:24.175 TEST_HEADER include/spdk/crc32.h 00:02:24.175 CC app/nvmf_tgt/nvmf_main.o 00:02:24.175 TEST_HEADER include/spdk/dif.h 00:02:24.175 TEST_HEADER include/spdk/dma.h 00:02:24.175 CC app/spdk_dd/spdk_dd.o 00:02:24.175 TEST_HEADER include/spdk/endian.h 00:02:24.175 TEST_HEADER include/spdk/env_dpdk.h 00:02:24.175 TEST_HEADER include/spdk/env.h 00:02:24.175 CC app/iscsi_tgt/iscsi_tgt.o 00:02:24.175 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.175 
TEST_HEADER include/spdk/fd_group.h 00:02:24.175 TEST_HEADER include/spdk/event.h 00:02:24.175 TEST_HEADER include/spdk/fd.h 00:02:24.175 TEST_HEADER include/spdk/file.h 00:02:24.175 TEST_HEADER include/spdk/fsdev.h 00:02:24.175 TEST_HEADER include/spdk/fsdev_module.h 00:02:24.175 TEST_HEADER include/spdk/ftl.h 00:02:24.175 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:24.175 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.175 TEST_HEADER include/spdk/histogram_data.h 00:02:24.175 TEST_HEADER include/spdk/hexlify.h 00:02:24.175 TEST_HEADER include/spdk/idxd.h 00:02:24.175 TEST_HEADER include/spdk/idxd_spec.h 00:02:24.175 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.175 TEST_HEADER include/spdk/init.h 00:02:24.175 TEST_HEADER include/spdk/ioat.h 00:02:24.175 TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.175 TEST_HEADER include/spdk/json.h 00:02:24.175 TEST_HEADER include/spdk/jsonrpc.h 00:02:24.175 TEST_HEADER include/spdk/keyring_module.h 00:02:24.175 TEST_HEADER include/spdk/keyring.h 00:02:24.175 TEST_HEADER include/spdk/likely.h 00:02:24.175 TEST_HEADER include/spdk/log.h 00:02:24.175 TEST_HEADER include/spdk/lvol.h 00:02:24.175 CC app/spdk_tgt/spdk_tgt.o 00:02:24.175 TEST_HEADER include/spdk/md5.h 00:02:24.175 TEST_HEADER include/spdk/memory.h 00:02:24.175 TEST_HEADER include/spdk/nbd.h 00:02:24.175 TEST_HEADER include/spdk/mmio.h 00:02:24.175 TEST_HEADER include/spdk/notify.h 00:02:24.175 TEST_HEADER include/spdk/net.h 00:02:24.175 TEST_HEADER include/spdk/nvme.h 00:02:24.175 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.175 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.175 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.175 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.175 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.175 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.175 TEST_HEADER include/spdk/nvmf.h 00:02:24.175 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:24.175 TEST_HEADER include/spdk/nvmf_spec.h 00:02:24.175 TEST_HEADER include/spdk/opal.h 00:02:24.175 TEST_HEADER include/spdk/nvmf_transport.h 00:02:24.175 TEST_HEADER include/spdk/opal_spec.h 00:02:24.175 TEST_HEADER include/spdk/pipe.h 00:02:24.175 TEST_HEADER include/spdk/pci_ids.h 00:02:24.175 TEST_HEADER include/spdk/queue.h 00:02:24.175 TEST_HEADER include/spdk/reduce.h 00:02:24.175 TEST_HEADER include/spdk/rpc.h 00:02:24.175 TEST_HEADER include/spdk/scheduler.h 00:02:24.175 TEST_HEADER include/spdk/scsi.h 00:02:24.175 TEST_HEADER include/spdk/sock.h 00:02:24.175 TEST_HEADER include/spdk/scsi_spec.h 00:02:24.175 TEST_HEADER include/spdk/stdinc.h 00:02:24.175 TEST_HEADER include/spdk/string.h 00:02:24.175 TEST_HEADER include/spdk/thread.h 00:02:24.175 TEST_HEADER include/spdk/trace.h 00:02:24.175 TEST_HEADER include/spdk/ublk.h 00:02:24.175 TEST_HEADER include/spdk/trace_parser.h 00:02:24.175 TEST_HEADER include/spdk/tree.h 00:02:24.175 TEST_HEADER include/spdk/util.h 00:02:24.175 TEST_HEADER include/spdk/uuid.h 00:02:24.175 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:24.175 TEST_HEADER include/spdk/version.h 00:02:24.175 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:24.175 TEST_HEADER include/spdk/vhost.h 00:02:24.175 TEST_HEADER include/spdk/vmd.h 00:02:24.175 TEST_HEADER include/spdk/xor.h 00:02:24.175 TEST_HEADER include/spdk/zipf.h 00:02:24.175 CXX test/cpp_headers/accel.o 00:02:24.175 CXX test/cpp_headers/accel_module.o 00:02:24.175 CXX test/cpp_headers/assert.o 00:02:24.175 CXX test/cpp_headers/barrier.o 00:02:24.175 CXX test/cpp_headers/base64.o 00:02:24.175 CXX test/cpp_headers/bdev.o 00:02:24.175 CXX 
test/cpp_headers/bit_array.o 00:02:24.175 CXX test/cpp_headers/bdev_module.o 00:02:24.175 CXX test/cpp_headers/bdev_zone.o 00:02:24.175 CXX test/cpp_headers/bit_pool.o 00:02:24.175 CXX test/cpp_headers/blob_bdev.o 00:02:24.175 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.175 CXX test/cpp_headers/blobfs.o 00:02:24.175 CXX test/cpp_headers/blob.o 00:02:24.175 CXX test/cpp_headers/conf.o 00:02:24.175 CXX test/cpp_headers/cpuset.o 00:02:24.175 CXX test/cpp_headers/config.o 00:02:24.175 CXX test/cpp_headers/crc16.o 00:02:24.175 CXX test/cpp_headers/crc64.o 00:02:24.175 CXX test/cpp_headers/crc32.o 00:02:24.175 CXX test/cpp_headers/dif.o 00:02:24.175 CXX test/cpp_headers/dma.o 00:02:24.175 CXX test/cpp_headers/endian.o 00:02:24.175 CXX test/cpp_headers/event.o 00:02:24.175 CXX test/cpp_headers/env.o 00:02:24.175 CXX test/cpp_headers/env_dpdk.o 00:02:24.175 CXX test/cpp_headers/fd.o 00:02:24.175 CXX test/cpp_headers/fd_group.o 00:02:24.175 CXX test/cpp_headers/fsdev.o 00:02:24.175 CXX test/cpp_headers/file.o 00:02:24.175 CXX test/cpp_headers/fsdev_module.o 00:02:24.176 CXX test/cpp_headers/ftl.o 00:02:24.176 CXX test/cpp_headers/fuse_dispatcher.o 00:02:24.440 CXX test/cpp_headers/gpt_spec.o 00:02:24.440 CXX test/cpp_headers/histogram_data.o 00:02:24.440 CXX test/cpp_headers/idxd.o 00:02:24.440 CXX test/cpp_headers/hexlify.o 00:02:24.440 CXX test/cpp_headers/idxd_spec.o 00:02:24.440 CC examples/ioat/perf/perf.o 00:02:24.440 CXX test/cpp_headers/init.o 00:02:24.440 CXX test/cpp_headers/ioat_spec.o 00:02:24.440 CXX test/cpp_headers/ioat.o 00:02:24.440 CXX test/cpp_headers/iscsi_spec.o 00:02:24.440 CXX test/cpp_headers/keyring.o 00:02:24.440 LINK spdk_lspci 00:02:24.440 CXX test/cpp_headers/json.o 00:02:24.440 CXX test/cpp_headers/keyring_module.o 00:02:24.440 CC examples/ioat/verify/verify.o 00:02:24.440 CXX test/cpp_headers/jsonrpc.o 00:02:24.440 CXX test/cpp_headers/log.o 00:02:24.440 CXX test/cpp_headers/lvol.o 00:02:24.440 CXX test/cpp_headers/likely.o 00:02:24.440 CC test/thread/poller_perf/poller_perf.o 00:02:24.440 CXX test/cpp_headers/mmio.o 00:02:24.440 CXX test/cpp_headers/md5.o 00:02:24.440 CXX test/cpp_headers/memory.o 00:02:24.440 CC test/env/vtophys/vtophys.o 00:02:24.440 CXX test/cpp_headers/notify.o 00:02:24.440 CXX test/cpp_headers/nbd.o 00:02:24.440 CXX test/cpp_headers/net.o 00:02:24.440 CXX test/cpp_headers/nvme_ocssd.o 00:02:24.440 CXX test/cpp_headers/nvme.o 00:02:24.440 CXX test/cpp_headers/nvme_intel.o 00:02:24.440 CXX test/cpp_headers/nvme_spec.o 00:02:24.440 CXX test/cpp_headers/nvme_zns.o 00:02:24.440 CC test/app/jsoncat/jsoncat.o 00:02:24.440 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.440 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:24.440 CXX test/cpp_headers/nvmf_cmd.o 00:02:24.440 CXX test/cpp_headers/nvmf.o 00:02:24.440 CC test/env/memory/memory_ut.o 00:02:24.440 CXX test/cpp_headers/nvmf_spec.o 00:02:24.440 CXX test/cpp_headers/nvmf_transport.o 00:02:24.440 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.440 CXX test/cpp_headers/opal.o 00:02:24.440 CC test/env/pci/pci_ut.o 00:02:24.440 CC examples/util/zipf/zipf.o 00:02:24.440 CXX test/cpp_headers/pipe.o 00:02:24.440 CXX test/cpp_headers/pci_ids.o 00:02:24.440 CXX test/cpp_headers/queue.o 00:02:24.440 CXX test/cpp_headers/opal_spec.o 00:02:24.440 CXX test/cpp_headers/reduce.o 00:02:24.440 CXX test/cpp_headers/rpc.o 00:02:24.440 CC app/fio/nvme/fio_plugin.o 00:02:24.440 CXX test/cpp_headers/scheduler.o 00:02:24.440 CXX test/cpp_headers/scsi.o 00:02:24.440 CXX test/cpp_headers/sock.o 00:02:24.440 CC 
test/app/histogram_perf/histogram_perf.o 00:02:24.440 CXX test/cpp_headers/scsi_spec.o 00:02:24.440 CXX test/cpp_headers/stdinc.o 00:02:24.440 CXX test/cpp_headers/string.o 00:02:24.440 CXX test/cpp_headers/thread.o 00:02:24.440 CXX test/cpp_headers/trace.o 00:02:24.440 CC test/app/stub/stub.o 00:02:24.440 CXX test/cpp_headers/util.o 00:02:24.440 CXX test/cpp_headers/trace_parser.o 00:02:24.440 CXX test/cpp_headers/ublk.o 00:02:24.440 CXX test/cpp_headers/tree.o 00:02:24.440 CXX test/cpp_headers/uuid.o 00:02:24.440 CXX test/cpp_headers/version.o 00:02:24.440 CC test/app/bdev_svc/bdev_svc.o 00:02:24.440 CXX test/cpp_headers/vfio_user_pci.o 00:02:24.440 CXX test/cpp_headers/vfio_user_spec.o 00:02:24.440 CXX test/cpp_headers/vhost.o 00:02:24.440 CC test/dma/test_dma/test_dma.o 00:02:24.440 CXX test/cpp_headers/xor.o 00:02:24.440 CXX test/cpp_headers/vmd.o 00:02:24.440 CXX test/cpp_headers/zipf.o 00:02:24.440 CC app/fio/bdev/fio_plugin.o 00:02:24.440 LINK spdk_nvme_discover 00:02:24.440 LINK spdk_trace_record 00:02:24.708 LINK nvmf_tgt 00:02:24.708 LINK interrupt_tgt 00:02:24.708 LINK rpc_client_test 00:02:24.708 LINK iscsi_tgt 00:02:24.969 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.970 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.970 LINK vtophys 00:02:24.970 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.235 LINK spdk_tgt 00:02:25.235 LINK spdk_dd 00:02:25.235 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:25.235 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:25.235 LINK bdev_svc 00:02:25.235 LINK spdk_trace 00:02:25.235 LINK poller_perf 00:02:25.235 LINK zipf 00:02:25.235 LINK jsoncat 00:02:25.493 LINK histogram_perf 00:02:25.493 LINK stub 00:02:25.493 LINK ioat_perf 00:02:25.493 LINK env_dpdk_post_init 00:02:25.493 LINK verify 00:02:25.751 CC app/vhost/vhost.o 00:02:25.751 CC examples/idxd/perf/perf.o 00:02:25.751 CC test/event/event_perf/event_perf.o 00:02:25.751 LINK nvme_fuzz 00:02:25.751 LINK vhost_fuzz 00:02:25.751 CC examples/sock/hello_world/hello_sock.o 00:02:25.752 CC test/event/reactor/reactor.o 00:02:25.752 CC examples/vmd/lsvmd/lsvmd.o 00:02:25.752 CC examples/vmd/led/led.o 00:02:25.752 CC test/event/reactor_perf/reactor_perf.o 00:02:26.013 CC examples/thread/thread/thread_ex.o 00:02:26.013 CC test/event/app_repeat/app_repeat.o 00:02:26.013 CC test/event/scheduler/scheduler.o 00:02:26.013 LINK spdk_nvme 00:02:26.013 LINK pci_ut 00:02:26.013 LINK test_dma 00:02:26.013 LINK spdk_bdev 00:02:26.013 LINK mem_callbacks 00:02:26.013 LINK spdk_nvme_perf 00:02:26.013 LINK vhost 00:02:26.013 LINK spdk_top 00:02:26.013 LINK spdk_nvme_identify 00:02:26.013 LINK event_perf 00:02:26.013 LINK reactor 00:02:26.013 LINK lsvmd 00:02:26.013 LINK led 00:02:26.013 LINK reactor_perf 00:02:26.013 LINK app_repeat 00:02:26.013 LINK hello_sock 00:02:26.275 LINK scheduler 00:02:26.275 LINK thread 00:02:26.275 LINK idxd_perf 00:02:26.536 CC test/nvme/sgl/sgl.o 00:02:26.536 CC test/nvme/reset/reset.o 00:02:26.536 CC test/nvme/e2edp/nvme_dp.o 00:02:26.536 CC test/nvme/err_injection/err_injection.o 00:02:26.536 CC test/nvme/reserve/reserve.o 00:02:26.536 CC test/nvme/connect_stress/connect_stress.o 00:02:26.536 CC test/nvme/aer/aer.o 00:02:26.536 LINK memory_ut 00:02:26.536 CC test/nvme/simple_copy/simple_copy.o 00:02:26.536 CC test/nvme/overhead/overhead.o 00:02:26.536 CC test/nvme/startup/startup.o 00:02:26.536 CC test/nvme/boot_partition/boot_partition.o 00:02:26.536 CC test/nvme/fused_ordering/fused_ordering.o 00:02:26.536 CC test/nvme/fdp/fdp.o 00:02:26.536 CC test/nvme/cuse/cuse.o 
00:02:26.536 CC test/nvme/compliance/nvme_compliance.o 00:02:26.536 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.536 CC test/blobfs/mkfs/mkfs.o 00:02:26.536 CC test/accel/dif/dif.o 00:02:26.798 CC examples/nvme/hello_world/hello_world.o 00:02:26.798 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:26.798 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:26.798 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:26.798 CC examples/nvme/reconnect/reconnect.o 00:02:26.798 CC examples/nvme/abort/abort.o 00:02:26.798 CC examples/nvme/hotplug/hotplug.o 00:02:26.798 CC examples/nvme/arbitration/arbitration.o 00:02:26.798 CC test/lvol/esnap/esnap.o 00:02:26.798 LINK boot_partition 00:02:26.798 LINK err_injection 00:02:26.798 LINK connect_stress 00:02:26.798 LINK startup 00:02:26.798 CC examples/accel/perf/accel_perf.o 00:02:26.798 LINK doorbell_aers 00:02:26.798 LINK fused_ordering 00:02:26.798 LINK reserve 00:02:26.798 LINK simple_copy 00:02:26.798 LINK sgl 00:02:26.798 LINK mkfs 00:02:26.798 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:26.798 CC examples/blob/cli/blobcli.o 00:02:26.798 CC examples/blob/hello_world/hello_blob.o 00:02:26.798 LINK aer 00:02:26.798 LINK reset 00:02:26.798 LINK nvme_dp 00:02:27.058 LINK overhead 00:02:27.058 LINK fdp 00:02:27.058 LINK nvme_compliance 00:02:27.058 LINK cmb_copy 00:02:27.058 LINK pmr_persistence 00:02:27.058 LINK hello_world 00:02:27.058 LINK hotplug 00:02:27.058 LINK iscsi_fuzz 00:02:27.058 LINK arbitration 00:02:27.058 LINK reconnect 00:02:27.058 LINK abort 00:02:27.319 LINK hello_blob 00:02:27.319 LINK hello_fsdev 00:02:27.319 LINK nvme_manage 00:02:27.319 LINK dif 00:02:27.319 LINK accel_perf 00:02:27.319 LINK blobcli 00:02:27.901 LINK cuse 00:02:27.901 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.901 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.901 CC test/bdev/bdevio/bdevio.o 00:02:28.164 LINK hello_bdev 00:02:28.424 LINK bdevio 00:02:28.684 LINK bdevperf 00:02:29.256 CC examples/nvmf/nvmf/nvmf.o 00:02:29.516 LINK nvmf 00:02:30.899 LINK esnap 00:02:31.160 00:02:31.160 real 0m55.975s 00:02:31.160 user 8m7.537s 00:02:31.160 sys 5m42.450s 00:02:31.160 00:10:01 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:31.160 00:10:01 make -- common/autotest_common.sh@10 -- $ set +x 00:02:31.160 ************************************ 00:02:31.160 END TEST make 00:02:31.160 ************************************ 00:02:31.160 00:10:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:31.160 00:10:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:31.160 00:10:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:31.160 00:10:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.160 00:10:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:31.160 00:10:01 -- pm/common@44 -- $ pid=2925118 00:02:31.160 00:10:01 -- pm/common@50 -- $ kill -TERM 2925118 00:02:31.160 00:10:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.160 00:10:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:31.160 00:10:01 -- pm/common@44 -- $ pid=2925119 00:02:31.160 00:10:01 -- pm/common@50 -- $ kill -TERM 2925119 00:02:31.160 00:10:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.160 00:10:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:31.160 
00:10:01 -- pm/common@44 -- $ pid=2925121 00:02:31.160 00:10:01 -- pm/common@50 -- $ kill -TERM 2925121 00:02:31.160 00:10:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.160 00:10:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:31.160 00:10:01 -- pm/common@44 -- $ pid=2925145 00:02:31.160 00:10:01 -- pm/common@50 -- $ sudo -E kill -TERM 2925145 00:02:31.421 00:10:01 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:31.422 00:10:01 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:31.422 00:10:01 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:31.422 00:10:01 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:31.422 00:10:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:31.422 00:10:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:31.422 00:10:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:31.422 00:10:01 -- scripts/common.sh@336 -- # IFS=.-: 00:02:31.422 00:10:01 -- scripts/common.sh@336 -- # read -ra ver1 00:02:31.422 00:10:01 -- scripts/common.sh@337 -- # IFS=.-: 00:02:31.422 00:10:01 -- scripts/common.sh@337 -- # read -ra ver2 00:02:31.422 00:10:01 -- scripts/common.sh@338 -- # local 'op=<' 00:02:31.422 00:10:01 -- scripts/common.sh@340 -- # ver1_l=2 00:02:31.422 00:10:01 -- scripts/common.sh@341 -- # ver2_l=1 00:02:31.422 00:10:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:31.422 00:10:01 -- scripts/common.sh@344 -- # case "$op" in 00:02:31.422 00:10:01 -- scripts/common.sh@345 -- # : 1 00:02:31.422 00:10:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:31.422 00:10:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:31.422 00:10:01 -- scripts/common.sh@365 -- # decimal 1 00:02:31.422 00:10:01 -- scripts/common.sh@353 -- # local d=1 00:02:31.422 00:10:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:31.422 00:10:01 -- scripts/common.sh@355 -- # echo 1 00:02:31.422 00:10:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:31.422 00:10:01 -- scripts/common.sh@366 -- # decimal 2 00:02:31.422 00:10:01 -- scripts/common.sh@353 -- # local d=2 00:02:31.422 00:10:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:31.422 00:10:01 -- scripts/common.sh@355 -- # echo 2 00:02:31.422 00:10:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:31.422 00:10:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:31.422 00:10:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:31.422 00:10:01 -- scripts/common.sh@368 -- # return 0 00:02:31.422 00:10:01 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:31.422 00:10:01 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:31.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.422 --rc genhtml_branch_coverage=1 00:02:31.422 --rc genhtml_function_coverage=1 00:02:31.422 --rc genhtml_legend=1 00:02:31.422 --rc geninfo_all_blocks=1 00:02:31.422 --rc geninfo_unexecuted_blocks=1 00:02:31.422 00:02:31.422 ' 00:02:31.422 00:10:01 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:31.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.422 --rc genhtml_branch_coverage=1 00:02:31.422 --rc genhtml_function_coverage=1 00:02:31.422 --rc genhtml_legend=1 00:02:31.422 --rc geninfo_all_blocks=1 00:02:31.422 --rc geninfo_unexecuted_blocks=1 00:02:31.422 00:02:31.422 ' 00:02:31.422 00:10:01 -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:31.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.422 --rc genhtml_branch_coverage=1 00:02:31.422 --rc genhtml_function_coverage=1 00:02:31.422 --rc genhtml_legend=1 00:02:31.422 --rc geninfo_all_blocks=1 00:02:31.422 --rc geninfo_unexecuted_blocks=1 00:02:31.422 00:02:31.422 ' 00:02:31.422 00:10:01 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:31.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.422 --rc genhtml_branch_coverage=1 00:02:31.422 --rc genhtml_function_coverage=1 00:02:31.422 --rc genhtml_legend=1 00:02:31.422 --rc geninfo_all_blocks=1 00:02:31.422 --rc geninfo_unexecuted_blocks=1 00:02:31.422 00:02:31.422 ' 00:02:31.422 00:10:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:31.422 00:10:01 -- nvmf/common.sh@7 -- # uname -s 00:02:31.422 00:10:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:31.422 00:10:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:31.422 00:10:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:31.422 00:10:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:31.422 00:10:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:31.422 00:10:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:31.422 00:10:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:31.422 00:10:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:31.422 00:10:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:31.422 00:10:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:31.422 00:10:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:31.422 00:10:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:31.422 00:10:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:31.422 00:10:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:31.422 00:10:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:31.422 00:10:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:31.422 00:10:01 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:31.422 00:10:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:31.422 00:10:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:31.422 00:10:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.422 00:10:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.422 00:10:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.422 00:10:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.422 00:10:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.422 00:10:01 -- paths/export.sh@5 -- # export PATH 00:02:31.422 00:10:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.422 00:10:01 -- nvmf/common.sh@51 -- # : 0 00:02:31.422 00:10:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:31.422 00:10:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:31.422 00:10:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:31.422 00:10:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:31.422 00:10:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:31.422 00:10:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:31.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:31.422 00:10:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:31.422 00:10:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:31.422 00:10:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:31.422 00:10:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:31.422 00:10:01 -- spdk/autotest.sh@32 -- # uname -s 00:02:31.422 00:10:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:31.422 00:10:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:31.422 00:10:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.422 00:10:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:31.422 00:10:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.422 00:10:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:31.422 00:10:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:31.422 00:10:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:31.422 00:10:02 -- spdk/autotest.sh@48 -- # udevadm_pid=2991281 00:02:31.422 00:10:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:31.422 00:10:02 -- pm/common@17 -- # local monitor 00:02:31.422 00:10:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:31.422 00:10:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.422 00:10:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.422 00:10:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.422 00:10:02 -- pm/common@21 -- # date +%s 00:02:31.422 00:10:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.422 00:10:02 -- pm/common@21 -- # date +%s 00:02:31.422 00:10:02 -- pm/common@25 -- # sleep 1 00:02:31.422 00:10:02 -- pm/common@21 -- # date +%s 00:02:31.422 00:10:02 -- pm/common@21 -- # date +%s 00:02:31.422 00:10:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425402 00:02:31.422 00:10:02 -- pm/common@21 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425402 00:02:31.422 00:10:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425402 00:02:31.422 00:10:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425402 00:02:31.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425402_collect-cpu-load.pm.log 00:02:31.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425402_collect-vmstat.pm.log 00:02:31.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425402_collect-cpu-temp.pm.log 00:02:31.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425402_collect-bmc-pm.bmc.pm.log 00:02:32.717 00:10:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:32.717 00:10:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:32.717 00:10:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:32.717 00:10:03 -- common/autotest_common.sh@10 -- # set +x 00:02:32.717 00:10:03 -- spdk/autotest.sh@59 -- # create_test_list 00:02:32.717 00:10:03 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:32.717 00:10:03 -- common/autotest_common.sh@10 -- # set +x 00:02:32.717 00:10:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:32.717 00:10:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.717 00:10:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.717 00:10:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:32.717 00:10:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.717 00:10:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:32.717 00:10:03 -- common/autotest_common.sh@1455 -- # uname 00:02:32.717 00:10:03 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:32.717 00:10:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:32.717 00:10:03 -- common/autotest_common.sh@1475 -- # uname 00:02:32.717 00:10:03 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:32.717 00:10:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:32.717 00:10:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:32.717 lcov: LCOV version 1.15 00:02:32.717 00:10:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:54.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:54.700 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:02.842 00:10:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:02.842 00:10:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:02.842 00:10:33 -- common/autotest_common.sh@10 -- # set +x 00:03:02.843 00:10:33 -- spdk/autotest.sh@78 -- # rm -f 00:03:02.843 00:10:33 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.148 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:06.148 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:06.148 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:06.148 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:06.148 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:06.148 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:06.408 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:06.408 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:06.669 00:10:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:06.669 00:10:37 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:06.669 00:10:37 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:06.669 00:10:37 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:06.669 00:10:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:06.669 00:10:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:06.669 00:10:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:06.669 00:10:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.669 00:10:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:06.669 00:10:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:06.669 00:10:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:06.669 00:10:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:06.669 00:10:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:06.669 00:10:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:06.669 00:10:37 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:06.669 No valid GPT data, bailing 00:03:06.669 00:10:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.669 00:10:37 -- scripts/common.sh@394 -- # pt= 00:03:06.669 00:10:37 -- scripts/common.sh@395 -- # return 1 00:03:06.669 00:10:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:06.669 1+0 records in 00:03:06.669 
1+0 records out 00:03:06.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00191198 s, 548 MB/s 00:03:06.669 00:10:37 -- spdk/autotest.sh@105 -- # sync 00:03:06.669 00:10:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:06.669 00:10:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:06.669 00:10:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.694 00:10:45 -- spdk/autotest.sh@111 -- # uname -s 00:03:16.694 00:10:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:16.695 00:10:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:16.695 00:10:45 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:18.658 Hugepages 00:03:18.658 node hugesize free / total 00:03:18.658 node0 1048576kB 0 / 0 00:03:18.658 node0 2048kB 0 / 0 00:03:18.658 node1 1048576kB 0 / 0 00:03:18.658 node1 2048kB 0 / 0 00:03:18.658 00:03:18.658 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:18.658 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:18.658 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:18.658 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:18.658 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:18.658 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:18.658 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:18.658 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:18.658 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:18.919 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:18.919 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:18.919 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:18.919 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:18.919 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:18.919 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:18.919 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:18.919 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:18.919 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:18.919 00:10:49 -- spdk/autotest.sh@117 -- # uname -s 00:03:18.919 00:10:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:18.919 00:10:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:18.919 00:10:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.219 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:22.219 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:24.132 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:24.132 00:10:54 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:25.514 00:10:55 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:25.514 00:10:55 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:25.514 00:10:55 -- 
common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:25.514 00:10:55 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:25.514 00:10:55 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:25.514 00:10:55 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:25.514 00:10:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:25.514 00:10:55 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:25.514 00:10:55 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:25.514 00:10:55 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:25.514 00:10:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:25.514 00:10:55 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.812 Waiting for block devices as requested 00:03:28.812 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:28.812 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:28.812 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:29.074 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:29.074 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:29.074 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:29.335 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:29.335 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:29.335 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:29.597 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:29.597 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:29.597 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:29.859 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:29.859 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:29.859 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:30.119 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:30.119 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:30.119 00:11:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:30.120 00:11:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:30.120 00:11:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:30.120 00:11:00 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:30.120 00:11:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:30.120 00:11:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:30.120 00:11:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:30.120 00:11:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:30.120 00:11:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:30.120 00:11:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:30.120 00:11:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:30.120 00:11:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:30.120 00:11:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:30.120 00:11:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:30.120 00:11:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:30.120 00:11:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:30.120 00:11:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:30.120 
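Aside on the id-ctrl parsing above: the script reads the controller's Optional Admin Command Support field (oacs = 0x5f on this controller) and tests the Namespace Management bit (0x08), which is why oacs_ns_manage comes out as 8. A stand-alone sketch of the same check, with the controller node assumed rather than discovered:

    # Sketch only: read OACS from `nvme id-ctrl` and test the Namespace
    # Management bit (0x08), mirroring oacs=' 0x5f' -> oacs_ns_manage=8 above.
    ctrlr=/dev/nvme0                                         # assumed controller node
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)  # e.g. ' 0x5f'
    if (( oacs & 0x8 )); then
        echo "namespace management supported on $ctrlr"
    fi
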
00:11:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:30.120 00:11:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:30.120 00:11:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:30.120 00:11:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:30.120 00:11:00 -- common/autotest_common.sh@1541 -- # continue 00:03:30.120 00:11:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:30.120 00:11:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:30.120 00:11:00 -- common/autotest_common.sh@10 -- # set +x 00:03:30.120 00:11:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:30.120 00:11:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:30.120 00:11:00 -- common/autotest_common.sh@10 -- # set +x 00:03:30.120 00:11:00 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.326 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:34.326 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:34.326 00:11:04 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:34.326 00:11:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:34.326 00:11:04 -- common/autotest_common.sh@10 -- # set +x 00:03:34.326 00:11:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:34.326 00:11:04 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:34.326 00:11:04 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:34.326 00:11:04 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:34.326 00:11:04 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:34.326 00:11:04 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:34.326 00:11:04 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:34.326 00:11:04 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:34.326 00:11:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:34.326 00:11:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:34.326 00:11:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.326 00:11:04 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.326 00:11:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:34.326 00:11:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:34.326 00:11:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:34.326 00:11:04 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:34.326 00:11:04 -- common/autotest_common.sh@1564 -- # cat 
/sys/bus/pci/devices/0000:65:00.0/device 00:03:34.326 00:11:04 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:34.326 00:11:04 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:34.326 00:11:04 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:34.326 00:11:04 -- common/autotest_common.sh@1570 -- # return 0 00:03:34.326 00:11:04 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:34.326 00:11:04 -- common/autotest_common.sh@1578 -- # return 0 00:03:34.326 00:11:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:34.326 00:11:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:34.326 00:11:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:34.326 00:11:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:34.326 00:11:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:34.326 00:11:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:34.326 00:11:04 -- common/autotest_common.sh@10 -- # set +x 00:03:34.326 00:11:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:34.326 00:11:04 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:34.326 00:11:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.326 00:11:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.326 00:11:04 -- common/autotest_common.sh@10 -- # set +x 00:03:34.326 ************************************ 00:03:34.326 START TEST env 00:03:34.326 ************************************ 00:03:34.326 00:11:04 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:34.326 * Looking for test storage... 00:03:34.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:34.326 00:11:04 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:34.326 00:11:04 env -- common/autotest_common.sh@1681 -- # lcov --version 00:03:34.326 00:11:04 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:34.326 00:11:04 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:34.326 00:11:04 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.326 00:11:04 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.326 00:11:04 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.326 00:11:04 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.326 00:11:04 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.326 00:11:04 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.326 00:11:04 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.326 00:11:04 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.326 00:11:04 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.326 00:11:04 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.326 00:11:04 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.326 00:11:04 env -- scripts/common.sh@344 -- # case "$op" in 00:03:34.326 00:11:04 env -- scripts/common.sh@345 -- # : 1 00:03:34.326 00:11:04 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.326 00:11:04 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.326 00:11:04 env -- scripts/common.sh@365 -- # decimal 1 00:03:34.326 00:11:04 env -- scripts/common.sh@353 -- # local d=1 00:03:34.326 00:11:04 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.326 00:11:04 env -- scripts/common.sh@355 -- # echo 1 00:03:34.326 00:11:04 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.326 00:11:04 env -- scripts/common.sh@366 -- # decimal 2 00:03:34.326 00:11:04 env -- scripts/common.sh@353 -- # local d=2 00:03:34.326 00:11:04 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.326 00:11:04 env -- scripts/common.sh@355 -- # echo 2 00:03:34.326 00:11:04 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.326 00:11:04 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.326 00:11:04 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.326 00:11:04 env -- scripts/common.sh@368 -- # return 0 00:03:34.326 00:11:04 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.326 00:11:04 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:34.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.326 --rc genhtml_branch_coverage=1 00:03:34.326 --rc genhtml_function_coverage=1 00:03:34.326 --rc genhtml_legend=1 00:03:34.326 --rc geninfo_all_blocks=1 00:03:34.326 --rc geninfo_unexecuted_blocks=1 00:03:34.326 00:03:34.326 ' 00:03:34.326 00:11:04 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.327 --rc genhtml_branch_coverage=1 00:03:34.327 --rc genhtml_function_coverage=1 00:03:34.327 --rc genhtml_legend=1 00:03:34.327 --rc geninfo_all_blocks=1 00:03:34.327 --rc geninfo_unexecuted_blocks=1 00:03:34.327 00:03:34.327 ' 00:03:34.327 00:11:04 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.327 --rc genhtml_branch_coverage=1 00:03:34.327 --rc genhtml_function_coverage=1 00:03:34.327 --rc genhtml_legend=1 00:03:34.327 --rc geninfo_all_blocks=1 00:03:34.327 --rc geninfo_unexecuted_blocks=1 00:03:34.327 00:03:34.327 ' 00:03:34.327 00:11:04 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.327 --rc genhtml_branch_coverage=1 00:03:34.327 --rc genhtml_function_coverage=1 00:03:34.327 --rc genhtml_legend=1 00:03:34.327 --rc geninfo_all_blocks=1 00:03:34.327 --rc geninfo_unexecuted_blocks=1 00:03:34.327 00:03:34.327 ' 00:03:34.327 00:11:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:34.327 00:11:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.327 00:11:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.327 00:11:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 ************************************ 00:03:34.327 START TEST env_memory 00:03:34.327 ************************************ 00:03:34.327 00:11:04 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:34.327 00:03:34.327 00:03:34.327 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.327 http://cunit.sourceforge.net/ 00:03:34.327 00:03:34.327 00:03:34.327 Suite: memory 00:03:34.327 Test: alloc and free memory map ...[2024-10-09 00:11:04.898637] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:34.327 passed 00:03:34.327 Test: mem map translation ...[2024-10-09 00:11:04.924245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:34.327 [2024-10-09 00:11:04.924276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:34.327 [2024-10-09 00:11:04.924322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:34.327 [2024-10-09 00:11:04.924329] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:34.598 passed 00:03:34.598 Test: mem map registration ...[2024-10-09 00:11:04.979571] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:34.598 [2024-10-09 00:11:04.979594] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:34.598 passed 00:03:34.598 Test: mem map adjacent registrations ...passed 00:03:34.598 00:03:34.598 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.598 suites 1 1 n/a 0 0 00:03:34.598 tests 4 4 4 0 0 00:03:34.598 asserts 152 152 152 0 n/a 00:03:34.598 00:03:34.598 Elapsed time = 0.192 seconds 00:03:34.598 00:03:34.598 real 0m0.207s 00:03:34.598 user 0m0.193s 00:03:34.598 sys 0m0.013s 00:03:34.598 00:11:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.599 00:11:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:34.599 ************************************ 00:03:34.599 END TEST env_memory 00:03:34.599 ************************************ 00:03:34.599 00:11:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.599 00:11:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.599 00:11:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.599 00:11:05 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.599 ************************************ 00:03:34.599 START TEST env_vtophys 00:03:34.599 ************************************ 00:03:34.599 00:11:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.599 EAL: lib.eal log level changed from notice to debug 00:03:34.599 EAL: Detected lcore 0 as core 0 on socket 0 00:03:34.599 EAL: Detected lcore 1 as core 1 on socket 0 00:03:34.599 EAL: Detected lcore 2 as core 2 on socket 0 00:03:34.599 EAL: Detected lcore 3 as core 3 on socket 0 00:03:34.599 EAL: Detected lcore 4 as core 4 on socket 0 00:03:34.599 EAL: Detected lcore 5 as core 5 on socket 0 00:03:34.599 EAL: Detected lcore 6 as core 6 on socket 0 00:03:34.599 EAL: Detected lcore 7 as core 7 on socket 0 00:03:34.599 EAL: Detected lcore 8 as core 8 on socket 0 00:03:34.599 EAL: Detected lcore 9 as core 9 on socket 0 00:03:34.599 EAL: Detected lcore 10 as 
core 10 on socket 0 00:03:34.599 EAL: Detected lcore 11 as core 11 on socket 0 00:03:34.599 EAL: Detected lcore 12 as core 12 on socket 0 00:03:34.599 EAL: Detected lcore 13 as core 13 on socket 0 00:03:34.599 EAL: Detected lcore 14 as core 14 on socket 0 00:03:34.599 EAL: Detected lcore 15 as core 15 on socket 0 00:03:34.599 EAL: Detected lcore 16 as core 16 on socket 0 00:03:34.600 EAL: Detected lcore 17 as core 17 on socket 0 00:03:34.600 EAL: Detected lcore 18 as core 18 on socket 0 00:03:34.600 EAL: Detected lcore 19 as core 19 on socket 0 00:03:34.600 EAL: Detected lcore 20 as core 20 on socket 0 00:03:34.600 EAL: Detected lcore 21 as core 21 on socket 0 00:03:34.600 EAL: Detected lcore 22 as core 22 on socket 0 00:03:34.600 EAL: Detected lcore 23 as core 23 on socket 0 00:03:34.600 EAL: Detected lcore 24 as core 24 on socket 0 00:03:34.600 EAL: Detected lcore 25 as core 25 on socket 0 00:03:34.600 EAL: Detected lcore 26 as core 26 on socket 0 00:03:34.600 EAL: Detected lcore 27 as core 27 on socket 0 00:03:34.600 EAL: Detected lcore 28 as core 28 on socket 0 00:03:34.600 EAL: Detected lcore 29 as core 29 on socket 0 00:03:34.600 EAL: Detected lcore 30 as core 30 on socket 0 00:03:34.600 EAL: Detected lcore 31 as core 31 on socket 0 00:03:34.600 EAL: Detected lcore 32 as core 32 on socket 0 00:03:34.600 EAL: Detected lcore 33 as core 33 on socket 0 00:03:34.600 EAL: Detected lcore 34 as core 34 on socket 0 00:03:34.600 EAL: Detected lcore 35 as core 35 on socket 0 00:03:34.600 EAL: Detected lcore 36 as core 0 on socket 1 00:03:34.600 EAL: Detected lcore 37 as core 1 on socket 1 00:03:34.600 EAL: Detected lcore 38 as core 2 on socket 1 00:03:34.600 EAL: Detected lcore 39 as core 3 on socket 1 00:03:34.600 EAL: Detected lcore 40 as core 4 on socket 1 00:03:34.600 EAL: Detected lcore 41 as core 5 on socket 1 00:03:34.600 EAL: Detected lcore 42 as core 6 on socket 1 00:03:34.600 EAL: Detected lcore 43 as core 7 on socket 1 00:03:34.600 EAL: Detected lcore 44 as core 8 on socket 1 00:03:34.600 EAL: Detected lcore 45 as core 9 on socket 1 00:03:34.600 EAL: Detected lcore 46 as core 10 on socket 1 00:03:34.600 EAL: Detected lcore 47 as core 11 on socket 1 00:03:34.600 EAL: Detected lcore 48 as core 12 on socket 1 00:03:34.600 EAL: Detected lcore 49 as core 13 on socket 1 00:03:34.600 EAL: Detected lcore 50 as core 14 on socket 1 00:03:34.600 EAL: Detected lcore 51 as core 15 on socket 1 00:03:34.600 EAL: Detected lcore 52 as core 16 on socket 1 00:03:34.600 EAL: Detected lcore 53 as core 17 on socket 1 00:03:34.600 EAL: Detected lcore 54 as core 18 on socket 1 00:03:34.601 EAL: Detected lcore 55 as core 19 on socket 1 00:03:34.601 EAL: Detected lcore 56 as core 20 on socket 1 00:03:34.601 EAL: Detected lcore 57 as core 21 on socket 1 00:03:34.601 EAL: Detected lcore 58 as core 22 on socket 1 00:03:34.601 EAL: Detected lcore 59 as core 23 on socket 1 00:03:34.601 EAL: Detected lcore 60 as core 24 on socket 1 00:03:34.601 EAL: Detected lcore 61 as core 25 on socket 1 00:03:34.601 EAL: Detected lcore 62 as core 26 on socket 1 00:03:34.601 EAL: Detected lcore 63 as core 27 on socket 1 00:03:34.601 EAL: Detected lcore 64 as core 28 on socket 1 00:03:34.601 EAL: Detected lcore 65 as core 29 on socket 1 00:03:34.601 EAL: Detected lcore 66 as core 30 on socket 1 00:03:34.601 EAL: Detected lcore 67 as core 31 on socket 1 00:03:34.601 EAL: Detected lcore 68 as core 32 on socket 1 00:03:34.601 EAL: Detected lcore 69 as core 33 on socket 1 00:03:34.601 EAL: Detected lcore 70 as core 34 on socket 1 
00:03:34.601 EAL: Detected lcore 71 as core 35 on socket 1 00:03:34.601 EAL: Detected lcore 72 as core 0 on socket 0 00:03:34.601 EAL: Detected lcore 73 as core 1 on socket 0 00:03:34.601 EAL: Detected lcore 74 as core 2 on socket 0 00:03:34.601 EAL: Detected lcore 75 as core 3 on socket 0 00:03:34.601 EAL: Detected lcore 76 as core 4 on socket 0 00:03:34.601 EAL: Detected lcore 77 as core 5 on socket 0 00:03:34.601 EAL: Detected lcore 78 as core 6 on socket 0 00:03:34.601 EAL: Detected lcore 79 as core 7 on socket 0 00:03:34.601 EAL: Detected lcore 80 as core 8 on socket 0 00:03:34.601 EAL: Detected lcore 81 as core 9 on socket 0 00:03:34.601 EAL: Detected lcore 82 as core 10 on socket 0 00:03:34.601 EAL: Detected lcore 83 as core 11 on socket 0 00:03:34.601 EAL: Detected lcore 84 as core 12 on socket 0 00:03:34.601 EAL: Detected lcore 85 as core 13 on socket 0 00:03:34.601 EAL: Detected lcore 86 as core 14 on socket 0 00:03:34.601 EAL: Detected lcore 87 as core 15 on socket 0 00:03:34.602 EAL: Detected lcore 88 as core 16 on socket 0 00:03:34.602 EAL: Detected lcore 89 as core 17 on socket 0 00:03:34.602 EAL: Detected lcore 90 as core 18 on socket 0 00:03:34.602 EAL: Detected lcore 91 as core 19 on socket 0 00:03:34.602 EAL: Detected lcore 92 as core 20 on socket 0 00:03:34.602 EAL: Detected lcore 93 as core 21 on socket 0 00:03:34.602 EAL: Detected lcore 94 as core 22 on socket 0 00:03:34.602 EAL: Detected lcore 95 as core 23 on socket 0 00:03:34.602 EAL: Detected lcore 96 as core 24 on socket 0 00:03:34.602 EAL: Detected lcore 97 as core 25 on socket 0 00:03:34.602 EAL: Detected lcore 98 as core 26 on socket 0 00:03:34.602 EAL: Detected lcore 99 as core 27 on socket 0 00:03:34.602 EAL: Detected lcore 100 as core 28 on socket 0 00:03:34.602 EAL: Detected lcore 101 as core 29 on socket 0 00:03:34.602 EAL: Detected lcore 102 as core 30 on socket 0 00:03:34.602 EAL: Detected lcore 103 as core 31 on socket 0 00:03:34.602 EAL: Detected lcore 104 as core 32 on socket 0 00:03:34.602 EAL: Detected lcore 105 as core 33 on socket 0 00:03:34.602 EAL: Detected lcore 106 as core 34 on socket 0 00:03:34.602 EAL: Detected lcore 107 as core 35 on socket 0 00:03:34.602 EAL: Detected lcore 108 as core 0 on socket 1 00:03:34.602 EAL: Detected lcore 109 as core 1 on socket 1 00:03:34.602 EAL: Detected lcore 110 as core 2 on socket 1 00:03:34.602 EAL: Detected lcore 111 as core 3 on socket 1 00:03:34.602 EAL: Detected lcore 112 as core 4 on socket 1 00:03:34.602 EAL: Detected lcore 113 as core 5 on socket 1 00:03:34.602 EAL: Detected lcore 114 as core 6 on socket 1 00:03:34.603 EAL: Detected lcore 115 as core 7 on socket 1 00:03:34.603 EAL: Detected lcore 116 as core 8 on socket 1 00:03:34.603 EAL: Detected lcore 117 as core 9 on socket 1 00:03:34.603 EAL: Detected lcore 118 as core 10 on socket 1 00:03:34.603 EAL: Detected lcore 119 as core 11 on socket 1 00:03:34.603 EAL: Detected lcore 120 as core 12 on socket 1 00:03:34.603 EAL: Detected lcore 121 as core 13 on socket 1 00:03:34.603 EAL: Detected lcore 122 as core 14 on socket 1 00:03:34.603 EAL: Detected lcore 123 as core 15 on socket 1 00:03:34.603 EAL: Detected lcore 124 as core 16 on socket 1 00:03:34.603 EAL: Detected lcore 125 as core 17 on socket 1 00:03:34.603 EAL: Detected lcore 126 as core 18 on socket 1 00:03:34.603 EAL: Detected lcore 127 as core 19 on socket 1 00:03:34.603 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:34.603 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:34.603 EAL: Skipped lcore 130 as core 22 on socket 1 
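The long run of "Detected lcore N as core M on socket S" lines (it continues below until EAL caps out at its configured maximum of 128 lcores) is EAL reading the kernel's CPU topology. The same core/socket mapping can be pulled straight from sysfs; a small sketch, independent of the log:

    # Sketch: print each CPU's core id and package (socket), the same data EAL
    # reports as "lcore N as core M on socket S".
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        n=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        sock=$(cat "$cpu/topology/physical_package_id")
        echo "cpu $n: core $core on socket $sock"
    done
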
00:03:34.603 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:34.603 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:34.603 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:34.603 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:34.603 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:34.603 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:34.603 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:34.603 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:34.603 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:34.603 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:34.603 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:34.603 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:34.603 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:34.603 EAL: Maximum logical cores by configuration: 128 00:03:34.603 EAL: Detected CPU lcores: 128 00:03:34.603 EAL: Detected NUMA nodes: 2 00:03:34.603 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:34.603 EAL: Detected shared linkage of DPDK 00:03:34.603 EAL: No shared files mode enabled, IPC will be disabled 00:03:34.603 EAL: Bus pci wants IOVA as 'DC' 00:03:34.603 EAL: Buses did not request a specific IOVA mode. 00:03:34.604 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:34.604 EAL: Selected IOVA mode 'VA' 00:03:34.604 EAL: Probing VFIO support... 00:03:34.604 EAL: IOMMU type 1 (Type 1) is supported 00:03:34.604 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:34.604 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:34.604 EAL: VFIO support initialized 00:03:34.604 EAL: Ask a virtual area of 0x2e000 bytes 00:03:34.604 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:34.604 EAL: Setting up physically contiguous memory... 00:03:34.604 EAL: Setting maximum number of open files to 524288 00:03:34.604 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:34.604 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:34.604 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:34.604 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.604 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:34.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.604 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.604 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:34.604 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:34.604 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.604 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:34.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.604 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.604 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:34.604 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:34.605 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.605 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:34.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.605 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.605 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:34.605 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:34.605 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.605 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:34.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.605 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.605 EAL: 
Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:34.605 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:34.605 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:34.605 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.605 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:34.605 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.605 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.605 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:34.605 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:34.605 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.605 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:34.605 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.605 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.605 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:34.605 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:34.605 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.605 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:34.605 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.605 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.605 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:34.606 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:34.606 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.606 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:34.606 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.606 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.606 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:34.606 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:34.606 EAL: Hugepages will be freed exactly as allocated. 00:03:34.606 EAL: No shared files mode enabled, IPC is disabled 00:03:34.606 EAL: No shared files mode enabled, IPC is disabled 00:03:34.606 EAL: TSC frequency is ~2400000 KHz 00:03:34.606 EAL: Main lcore 0 is ready (tid=7febc2c0fa00;cpuset=[0]) 00:03:34.606 EAL: Trying to obtain current memory policy. 00:03:34.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.606 EAL: Restoring previous memory policy: 0 00:03:34.606 EAL: request: mp_malloc_sync 00:03:34.606 EAL: No shared files mode enabled, IPC is disabled 00:03:34.606 EAL: Heap on socket 0 was expanded by 2MB 00:03:34.606 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:34.869 EAL: Mem event callback 'spdk:(nil)' registered 00:03:34.869 00:03:34.869 00:03:34.869 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.869 http://cunit.sourceforge.net/ 00:03:34.869 00:03:34.869 00:03:34.869 Suite: components_suite 00:03:34.869 Test: vtophys_malloc_test ...passed 00:03:34.869 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:34.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.869 EAL: Restoring previous memory policy: 4 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was expanded by 4MB 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was shrunk by 4MB 00:03:34.869 EAL: Trying to obtain current memory policy. 00:03:34.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.869 EAL: Restoring previous memory policy: 4 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was expanded by 6MB 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was shrunk by 6MB 00:03:34.869 EAL: Trying to obtain current memory policy. 00:03:34.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.869 EAL: Restoring previous memory policy: 4 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was expanded by 10MB 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was shrunk by 10MB 00:03:34.869 EAL: Trying to obtain current memory policy. 00:03:34.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.869 EAL: Restoring previous memory policy: 4 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was expanded by 18MB 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was shrunk by 18MB 00:03:34.869 EAL: Trying to obtain current memory policy. 00:03:34.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.869 EAL: Restoring previous memory policy: 4 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was expanded by 34MB 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was shrunk by 34MB 00:03:34.869 EAL: Trying to obtain current memory policy. 
00:03:34.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.869 EAL: Restoring previous memory policy: 4 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was expanded by 66MB 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was shrunk by 66MB 00:03:34.869 EAL: Trying to obtain current memory policy. 00:03:34.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.869 EAL: Restoring previous memory policy: 4 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was expanded by 130MB 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was shrunk by 130MB 00:03:34.869 EAL: Trying to obtain current memory policy. 00:03:34.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.869 EAL: Restoring previous memory policy: 4 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was expanded by 258MB 00:03:34.869 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.869 EAL: request: mp_malloc_sync 00:03:34.869 EAL: No shared files mode enabled, IPC is disabled 00:03:34.869 EAL: Heap on socket 0 was shrunk by 258MB 00:03:34.869 EAL: Trying to obtain current memory policy. 00:03:34.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.129 EAL: Restoring previous memory policy: 4 00:03:35.129 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.129 EAL: request: mp_malloc_sync 00:03:35.129 EAL: No shared files mode enabled, IPC is disabled 00:03:35.129 EAL: Heap on socket 0 was expanded by 514MB 00:03:35.129 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.129 EAL: request: mp_malloc_sync 00:03:35.129 EAL: No shared files mode enabled, IPC is disabled 00:03:35.129 EAL: Heap on socket 0 was shrunk by 514MB 00:03:35.129 EAL: Trying to obtain current memory policy. 
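One pattern worth noting in the spdk_malloc portion of this test: the heap grow/shrink steps reported so far (4, 6, 10, 18, 34, 66, 130, 258 MB) and the 514 MB and 1026 MB rounds that follow all fit 2^n + 2 MB, consistent with the test allocating power-of-two sized buffers plus a small fixed overhead. The actual allocation sizes are internal to the test binary, so treat this as an observation about the log rather than a statement about the code; the reported sequence can be reproduced with:

    # Reproduce the expand/shrink sizes reported in this test: 2^n + 2 MB for n=1..10.
    for n in $(seq 1 10); do
        echo "$(( (1 << n) + 2 )) MB"
    done
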
00:03:35.129 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.403 EAL: Restoring previous memory policy: 4 00:03:35.403 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.403 EAL: request: mp_malloc_sync 00:03:35.403 EAL: No shared files mode enabled, IPC is disabled 00:03:35.403 EAL: Heap on socket 0 was expanded by 1026MB 00:03:35.403 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.403 EAL: request: mp_malloc_sync 00:03:35.403 EAL: No shared files mode enabled, IPC is disabled 00:03:35.403 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:35.403 passed 00:03:35.403 00:03:35.403 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.403 suites 1 1 n/a 0 0 00:03:35.403 tests 2 2 2 0 0 00:03:35.403 asserts 497 497 497 0 n/a 00:03:35.403 00:03:35.403 Elapsed time = 0.692 seconds 00:03:35.403 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.403 EAL: request: mp_malloc_sync 00:03:35.403 EAL: No shared files mode enabled, IPC is disabled 00:03:35.403 EAL: Heap on socket 0 was shrunk by 2MB 00:03:35.403 EAL: No shared files mode enabled, IPC is disabled 00:03:35.403 EAL: No shared files mode enabled, IPC is disabled 00:03:35.403 EAL: No shared files mode enabled, IPC is disabled 00:03:35.403 00:03:35.403 real 0m0.841s 00:03:35.403 user 0m0.431s 00:03:35.403 sys 0m0.375s 00:03:35.403 00:11:05 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:35.403 00:11:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:35.403 ************************************ 00:03:35.403 END TEST env_vtophys 00:03:35.403 ************************************ 00:03:35.403 00:11:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.403 00:11:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:35.403 00:11:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:35.403 00:11:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.665 ************************************ 00:03:35.665 START TEST env_pci 00:03:35.665 ************************************ 00:03:35.665 00:11:06 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.665 00:03:35.665 00:03:35.665 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.665 http://cunit.sourceforge.net/ 00:03:35.665 00:03:35.665 00:03:35.665 Suite: pci 00:03:35.665 Test: pci_hook ...[2024-10-09 00:11:06.068117] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3010208 has claimed it 00:03:35.665 EAL: Cannot find device (10000:00:01.0) 00:03:35.665 EAL: Failed to attach device on primary process 00:03:35.665 passed 00:03:35.665 00:03:35.665 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.665 suites 1 1 n/a 0 0 00:03:35.665 tests 1 1 1 0 0 00:03:35.665 asserts 25 25 25 0 n/a 00:03:35.665 00:03:35.665 Elapsed time = 0.030 seconds 00:03:35.665 00:03:35.665 real 0m0.050s 00:03:35.665 user 0m0.017s 00:03:35.665 sys 0m0.032s 00:03:35.665 00:11:06 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:35.665 00:11:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:35.665 ************************************ 00:03:35.665 END TEST env_pci 00:03:35.665 ************************************ 00:03:35.665 00:11:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:35.665 
00:11:06 env -- env/env.sh@15 -- # uname 00:03:35.665 00:11:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:35.665 00:11:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:35.665 00:11:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.665 00:11:06 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:35.665 00:11:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:35.665 00:11:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.665 ************************************ 00:03:35.665 START TEST env_dpdk_post_init 00:03:35.665 ************************************ 00:03:35.665 00:11:06 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.665 EAL: Detected CPU lcores: 128 00:03:35.665 EAL: Detected NUMA nodes: 2 00:03:35.665 EAL: Detected shared linkage of DPDK 00:03:35.665 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:35.665 EAL: Selected IOVA mode 'VA' 00:03:35.665 EAL: VFIO support initialized 00:03:35.665 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:35.926 EAL: Using IOMMU type 1 (Type 1) 00:03:35.926 EAL: Ignore mapping IO port bar(1) 00:03:35.926 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:36.186 EAL: Ignore mapping IO port bar(1) 00:03:36.186 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:36.447 EAL: Ignore mapping IO port bar(1) 00:03:36.447 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:36.708 EAL: Ignore mapping IO port bar(1) 00:03:36.708 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:36.708 EAL: Ignore mapping IO port bar(1) 00:03:36.968 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:36.968 EAL: Ignore mapping IO port bar(1) 00:03:37.228 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:37.228 EAL: Ignore mapping IO port bar(1) 00:03:37.489 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:37.489 EAL: Ignore mapping IO port bar(1) 00:03:37.489 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:37.749 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:38.010 EAL: Ignore mapping IO port bar(1) 00:03:38.010 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:38.271 EAL: Ignore mapping IO port bar(1) 00:03:38.271 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:38.271 EAL: Ignore mapping IO port bar(1) 00:03:38.532 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:38.532 EAL: Ignore mapping IO port bar(1) 00:03:38.792 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:38.792 EAL: Ignore mapping IO port bar(1) 00:03:39.053 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:39.054 EAL: Ignore mapping IO port bar(1) 00:03:39.054 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:39.316 EAL: Ignore mapping IO port bar(1) 00:03:39.316 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 
(socket 1) 00:03:39.576 EAL: Ignore mapping IO port bar(1) 00:03:39.576 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:39.576 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:39.576 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:39.836 Starting DPDK initialization... 00:03:39.836 Starting SPDK post initialization... 00:03:39.836 SPDK NVMe probe 00:03:39.836 Attaching to 0000:65:00.0 00:03:39.836 Attached to 0000:65:00.0 00:03:39.836 Cleaning up... 00:03:41.752 00:03:41.752 real 0m5.734s 00:03:41.752 user 0m0.096s 00:03:41.752 sys 0m0.196s 00:03:41.752 00:11:11 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.752 00:11:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:41.752 ************************************ 00:03:41.752 END TEST env_dpdk_post_init 00:03:41.752 ************************************ 00:03:41.752 00:11:11 env -- env/env.sh@26 -- # uname 00:03:41.752 00:11:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:41.752 00:11:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:41.752 00:11:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.752 00:11:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.752 00:11:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.752 ************************************ 00:03:41.752 START TEST env_mem_callbacks 00:03:41.752 ************************************ 00:03:41.752 00:11:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:41.752 EAL: Detected CPU lcores: 128 00:03:41.752 EAL: Detected NUMA nodes: 2 00:03:41.752 EAL: Detected shared linkage of DPDK 00:03:41.752 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:41.752 EAL: Selected IOVA mode 'VA' 00:03:41.752 EAL: VFIO support initialized 00:03:41.752 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:41.752 00:03:41.752 00:03:41.752 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.752 http://cunit.sourceforge.net/ 00:03:41.752 00:03:41.752 00:03:41.752 Suite: memory 00:03:41.752 Test: test ... 
00:03:41.752 register 0x200000200000 2097152 00:03:41.752 malloc 3145728 00:03:41.752 register 0x200000400000 4194304 00:03:41.752 buf 0x200000500000 len 3145728 PASSED 00:03:41.752 malloc 64 00:03:41.752 buf 0x2000004fff40 len 64 PASSED 00:03:41.752 malloc 4194304 00:03:41.752 register 0x200000800000 6291456 00:03:41.752 buf 0x200000a00000 len 4194304 PASSED 00:03:41.752 free 0x200000500000 3145728 00:03:41.752 free 0x2000004fff40 64 00:03:41.752 unregister 0x200000400000 4194304 PASSED 00:03:41.752 free 0x200000a00000 4194304 00:03:41.752 unregister 0x200000800000 6291456 PASSED 00:03:41.752 malloc 8388608 00:03:41.752 register 0x200000400000 10485760 00:03:41.753 buf 0x200000600000 len 8388608 PASSED 00:03:41.753 free 0x200000600000 8388608 00:03:41.753 unregister 0x200000400000 10485760 PASSED 00:03:41.753 passed 00:03:41.753 00:03:41.753 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.753 suites 1 1 n/a 0 0 00:03:41.753 tests 1 1 1 0 0 00:03:41.753 asserts 15 15 15 0 n/a 00:03:41.753 00:03:41.753 Elapsed time = 0.010 seconds 00:03:41.753 00:03:41.753 real 0m0.070s 00:03:41.753 user 0m0.024s 00:03:41.753 sys 0m0.046s 00:03:41.753 00:11:12 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.753 00:11:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:41.753 ************************************ 00:03:41.753 END TEST env_mem_callbacks 00:03:41.753 ************************************ 00:03:41.753 00:03:41.753 real 0m7.514s 00:03:41.753 user 0m1.034s 00:03:41.753 sys 0m1.037s 00:03:41.753 00:11:12 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.753 00:11:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.753 ************************************ 00:03:41.753 END TEST env 00:03:41.753 ************************************ 00:03:41.753 00:11:12 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:41.753 00:11:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.753 00:11:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.753 00:11:12 -- common/autotest_common.sh@10 -- # set +x 00:03:41.753 ************************************ 00:03:41.753 START TEST rpc 00:03:41.753 ************************************ 00:03:41.753 00:11:12 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:41.753 * Looking for test storage... 
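The register/unregister lines printed by mem_callbacks track the DPDK heap growing and shrinking underneath the test's allocations: after an initial 2 MB heap region, the 4 MB, 6 MB and 10 MB registrations appear to be the 3 MB, 4 MB and 8 MB buffers rounded up to 2 MB hugepage boundaries plus allocator overhead, which is what the registered memory-event callback is expected to observe. To reproduce just this suite by hand (binary path taken from the run above):

  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks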
00:03:41.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.753 00:11:12 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:41.753 00:11:12 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:41.753 00:11:12 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:42.014 00:11:12 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:42.014 00:11:12 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.014 00:11:12 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.014 00:11:12 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.014 00:11:12 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.014 00:11:12 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.014 00:11:12 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.014 00:11:12 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.014 00:11:12 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.014 00:11:12 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.014 00:11:12 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.014 00:11:12 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.014 00:11:12 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:42.014 00:11:12 rpc -- scripts/common.sh@345 -- # : 1 00:03:42.014 00:11:12 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.014 00:11:12 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:42.014 00:11:12 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:42.014 00:11:12 rpc -- scripts/common.sh@353 -- # local d=1 00:03:42.014 00:11:12 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.014 00:11:12 rpc -- scripts/common.sh@355 -- # echo 1 00:03:42.014 00:11:12 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.015 00:11:12 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:42.015 00:11:12 rpc -- scripts/common.sh@353 -- # local d=2 00:03:42.015 00:11:12 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.015 00:11:12 rpc -- scripts/common.sh@355 -- # echo 2 00:03:42.015 00:11:12 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.015 00:11:12 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.015 00:11:12 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.015 00:11:12 rpc -- scripts/common.sh@368 -- # return 0 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:42.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.015 --rc genhtml_branch_coverage=1 00:03:42.015 --rc genhtml_function_coverage=1 00:03:42.015 --rc genhtml_legend=1 00:03:42.015 --rc geninfo_all_blocks=1 00:03:42.015 --rc geninfo_unexecuted_blocks=1 00:03:42.015 00:03:42.015 ' 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:42.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.015 --rc genhtml_branch_coverage=1 00:03:42.015 --rc genhtml_function_coverage=1 00:03:42.015 --rc genhtml_legend=1 00:03:42.015 --rc geninfo_all_blocks=1 00:03:42.015 --rc geninfo_unexecuted_blocks=1 00:03:42.015 00:03:42.015 ' 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:42.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.015 --rc genhtml_branch_coverage=1 00:03:42.015 --rc genhtml_function_coverage=1 
00:03:42.015 --rc genhtml_legend=1 00:03:42.015 --rc geninfo_all_blocks=1 00:03:42.015 --rc geninfo_unexecuted_blocks=1 00:03:42.015 00:03:42.015 ' 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:42.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.015 --rc genhtml_branch_coverage=1 00:03:42.015 --rc genhtml_function_coverage=1 00:03:42.015 --rc genhtml_legend=1 00:03:42.015 --rc geninfo_all_blocks=1 00:03:42.015 --rc geninfo_unexecuted_blocks=1 00:03:42.015 00:03:42.015 ' 00:03:42.015 00:11:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3011535 00:03:42.015 00:11:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.015 00:11:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3011535 00:03:42.015 00:11:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@831 -- # '[' -z 3011535 ']' 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:42.015 00:11:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.015 [2024-10-09 00:11:12.475543] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:03:42.015 [2024-10-09 00:11:12.475609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011535 ] 00:03:42.015 [2024-10-09 00:11:12.559402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.276 [2024-10-09 00:11:12.657202] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:42.276 [2024-10-09 00:11:12.657265] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3011535' to capture a snapshot of events at runtime. 00:03:42.276 [2024-10-09 00:11:12.657273] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:42.276 [2024-10-09 00:11:12.657281] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:42.276 [2024-10-09 00:11:12.657287] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3011535 for offline analysis/debug. 
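As the startup notices above suggest, the '-e bdev' tracepoint group enabled for this spdk_tgt instance can be examined either live or offline from the named shared-memory file; a sketch, assuming the spdk_trace tool was built into build/bin alongside spdk_tgt and supports reading a copied trace file with -f:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s spdk_tgt -p 3011535
  cp /dev/shm/spdk_tgt_trace.pid3011535 /tmp/ && \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid3011535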
00:03:42.276 [2024-10-09 00:11:12.658152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.848 00:11:13 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:42.848 00:11:13 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:42.848 00:11:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.848 00:11:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.848 00:11:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:42.848 00:11:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:42.848 00:11:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.848 00:11:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.848 00:11:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.848 ************************************ 00:03:42.848 START TEST rpc_integrity 00:03:42.848 ************************************ 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:42.848 { 00:03:42.848 "name": "Malloc0", 00:03:42.848 "aliases": [ 00:03:42.848 "12b82ad9-d185-4ab0-a378-204e2f200b55" 00:03:42.848 ], 00:03:42.848 "product_name": "Malloc disk", 00:03:42.848 "block_size": 512, 00:03:42.848 "num_blocks": 16384, 00:03:42.848 "uuid": "12b82ad9-d185-4ab0-a378-204e2f200b55", 00:03:42.848 "assigned_rate_limits": { 00:03:42.848 "rw_ios_per_sec": 0, 00:03:42.848 "rw_mbytes_per_sec": 0, 00:03:42.848 "r_mbytes_per_sec": 0, 00:03:42.848 "w_mbytes_per_sec": 0 00:03:42.848 }, 
00:03:42.848 "claimed": false, 00:03:42.848 "zoned": false, 00:03:42.848 "supported_io_types": { 00:03:42.848 "read": true, 00:03:42.848 "write": true, 00:03:42.848 "unmap": true, 00:03:42.848 "flush": true, 00:03:42.848 "reset": true, 00:03:42.848 "nvme_admin": false, 00:03:42.848 "nvme_io": false, 00:03:42.848 "nvme_io_md": false, 00:03:42.848 "write_zeroes": true, 00:03:42.848 "zcopy": true, 00:03:42.848 "get_zone_info": false, 00:03:42.848 "zone_management": false, 00:03:42.848 "zone_append": false, 00:03:42.848 "compare": false, 00:03:42.848 "compare_and_write": false, 00:03:42.848 "abort": true, 00:03:42.848 "seek_hole": false, 00:03:42.848 "seek_data": false, 00:03:42.848 "copy": true, 00:03:42.848 "nvme_iov_md": false 00:03:42.848 }, 00:03:42.848 "memory_domains": [ 00:03:42.848 { 00:03:42.848 "dma_device_id": "system", 00:03:42.848 "dma_device_type": 1 00:03:42.848 }, 00:03:42.848 { 00:03:42.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.848 "dma_device_type": 2 00:03:42.848 } 00:03:42.848 ], 00:03:42.848 "driver_specific": {} 00:03:42.848 } 00:03:42.848 ]' 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.848 [2024-10-09 00:11:13.459807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:42.848 [2024-10-09 00:11:13.459857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:42.848 [2024-10-09 00:11:13.459873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e2de60 00:03:42.848 [2024-10-09 00:11:13.459881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:42.848 [2024-10-09 00:11:13.461441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:42.848 [2024-10-09 00:11:13.461477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:42.848 Passthru0 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.848 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.848 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.109 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:43.109 { 00:03:43.109 "name": "Malloc0", 00:03:43.109 "aliases": [ 00:03:43.109 "12b82ad9-d185-4ab0-a378-204e2f200b55" 00:03:43.109 ], 00:03:43.109 "product_name": "Malloc disk", 00:03:43.109 "block_size": 512, 00:03:43.109 "num_blocks": 16384, 00:03:43.109 "uuid": "12b82ad9-d185-4ab0-a378-204e2f200b55", 00:03:43.109 "assigned_rate_limits": { 00:03:43.109 "rw_ios_per_sec": 0, 00:03:43.109 "rw_mbytes_per_sec": 0, 00:03:43.109 "r_mbytes_per_sec": 0, 00:03:43.109 "w_mbytes_per_sec": 0 00:03:43.109 }, 00:03:43.109 "claimed": true, 00:03:43.109 "claim_type": "exclusive_write", 00:03:43.109 "zoned": false, 00:03:43.109 "supported_io_types": { 00:03:43.109 "read": true, 00:03:43.109 "write": true, 00:03:43.109 "unmap": true, 00:03:43.109 "flush": 
true, 00:03:43.109 "reset": true, 00:03:43.109 "nvme_admin": false, 00:03:43.109 "nvme_io": false, 00:03:43.109 "nvme_io_md": false, 00:03:43.109 "write_zeroes": true, 00:03:43.109 "zcopy": true, 00:03:43.109 "get_zone_info": false, 00:03:43.109 "zone_management": false, 00:03:43.109 "zone_append": false, 00:03:43.109 "compare": false, 00:03:43.109 "compare_and_write": false, 00:03:43.109 "abort": true, 00:03:43.109 "seek_hole": false, 00:03:43.109 "seek_data": false, 00:03:43.109 "copy": true, 00:03:43.109 "nvme_iov_md": false 00:03:43.109 }, 00:03:43.109 "memory_domains": [ 00:03:43.109 { 00:03:43.109 "dma_device_id": "system", 00:03:43.109 "dma_device_type": 1 00:03:43.109 }, 00:03:43.109 { 00:03:43.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.109 "dma_device_type": 2 00:03:43.109 } 00:03:43.109 ], 00:03:43.109 "driver_specific": {} 00:03:43.109 }, 00:03:43.109 { 00:03:43.109 "name": "Passthru0", 00:03:43.109 "aliases": [ 00:03:43.109 "8b64ab2a-720c-546e-9a77-5465dbec35da" 00:03:43.109 ], 00:03:43.109 "product_name": "passthru", 00:03:43.109 "block_size": 512, 00:03:43.109 "num_blocks": 16384, 00:03:43.109 "uuid": "8b64ab2a-720c-546e-9a77-5465dbec35da", 00:03:43.109 "assigned_rate_limits": { 00:03:43.109 "rw_ios_per_sec": 0, 00:03:43.109 "rw_mbytes_per_sec": 0, 00:03:43.109 "r_mbytes_per_sec": 0, 00:03:43.109 "w_mbytes_per_sec": 0 00:03:43.109 }, 00:03:43.109 "claimed": false, 00:03:43.109 "zoned": false, 00:03:43.109 "supported_io_types": { 00:03:43.109 "read": true, 00:03:43.109 "write": true, 00:03:43.109 "unmap": true, 00:03:43.109 "flush": true, 00:03:43.109 "reset": true, 00:03:43.109 "nvme_admin": false, 00:03:43.109 "nvme_io": false, 00:03:43.109 "nvme_io_md": false, 00:03:43.109 "write_zeroes": true, 00:03:43.109 "zcopy": true, 00:03:43.109 "get_zone_info": false, 00:03:43.109 "zone_management": false, 00:03:43.109 "zone_append": false, 00:03:43.109 "compare": false, 00:03:43.109 "compare_and_write": false, 00:03:43.109 "abort": true, 00:03:43.109 "seek_hole": false, 00:03:43.109 "seek_data": false, 00:03:43.109 "copy": true, 00:03:43.109 "nvme_iov_md": false 00:03:43.109 }, 00:03:43.109 "memory_domains": [ 00:03:43.109 { 00:03:43.109 "dma_device_id": "system", 00:03:43.109 "dma_device_type": 1 00:03:43.109 }, 00:03:43.109 { 00:03:43.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.109 "dma_device_type": 2 00:03:43.109 } 00:03:43.109 ], 00:03:43.109 "driver_specific": { 00:03:43.109 "passthru": { 00:03:43.109 "name": "Passthru0", 00:03:43.109 "base_bdev_name": "Malloc0" 00:03:43.109 } 00:03:43.109 } 00:03:43.109 } 00:03:43.109 ]' 00:03:43.109 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:43.109 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:43.109 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.109 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.109 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.109 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:43.109 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:43.109 00:11:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:43.109 00:03:43.109 real 0m0.279s 00:03:43.109 user 0m0.168s 00:03:43.109 sys 0m0.048s 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.109 00:11:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.109 ************************************ 00:03:43.109 END TEST rpc_integrity 00:03:43.109 ************************************ 00:03:43.109 00:11:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:43.109 00:11:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.109 00:11:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.109 00:11:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.109 ************************************ 00:03:43.109 START TEST rpc_plugins 00:03:43.109 ************************************ 00:03:43.109 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:43.109 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:43.109 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.110 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.110 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.110 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:43.110 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:43.110 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.110 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.110 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.110 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:43.110 { 00:03:43.110 "name": "Malloc1", 00:03:43.110 "aliases": [ 00:03:43.110 "97571028-fee0-466a-9a32-f28341cda6db" 00:03:43.110 ], 00:03:43.110 "product_name": "Malloc disk", 00:03:43.110 "block_size": 4096, 00:03:43.110 "num_blocks": 256, 00:03:43.110 "uuid": "97571028-fee0-466a-9a32-f28341cda6db", 00:03:43.110 "assigned_rate_limits": { 00:03:43.110 "rw_ios_per_sec": 0, 00:03:43.110 "rw_mbytes_per_sec": 0, 00:03:43.110 "r_mbytes_per_sec": 0, 00:03:43.110 "w_mbytes_per_sec": 0 00:03:43.110 }, 00:03:43.110 "claimed": false, 00:03:43.110 "zoned": false, 00:03:43.110 "supported_io_types": { 00:03:43.110 "read": true, 00:03:43.110 "write": true, 00:03:43.110 "unmap": true, 00:03:43.110 "flush": true, 00:03:43.110 "reset": true, 00:03:43.110 "nvme_admin": false, 00:03:43.110 "nvme_io": false, 00:03:43.110 "nvme_io_md": false, 00:03:43.110 "write_zeroes": true, 00:03:43.110 "zcopy": true, 00:03:43.110 "get_zone_info": false, 00:03:43.110 "zone_management": false, 00:03:43.110 "zone_append": false, 00:03:43.110 "compare": false, 00:03:43.110 "compare_and_write": false, 00:03:43.110 "abort": true, 00:03:43.110 "seek_hole": false, 00:03:43.110 "seek_data": false, 00:03:43.110 "copy": true, 00:03:43.110 "nvme_iov_md": false 
00:03:43.110 }, 00:03:43.110 "memory_domains": [ 00:03:43.110 { 00:03:43.110 "dma_device_id": "system", 00:03:43.110 "dma_device_type": 1 00:03:43.110 }, 00:03:43.110 { 00:03:43.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.110 "dma_device_type": 2 00:03:43.110 } 00:03:43.110 ], 00:03:43.110 "driver_specific": {} 00:03:43.110 } 00:03:43.110 ]' 00:03:43.110 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:43.371 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:43.371 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:43.371 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.371 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.371 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.371 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:43.371 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.371 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.371 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.371 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:43.371 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:43.371 00:11:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:43.371 00:03:43.371 real 0m0.157s 00:03:43.371 user 0m0.093s 00:03:43.371 sys 0m0.026s 00:03:43.371 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.371 00:11:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.371 ************************************ 00:03:43.371 END TEST rpc_plugins 00:03:43.371 ************************************ 00:03:43.371 00:11:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:43.371 00:11:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.371 00:11:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.371 00:11:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.371 ************************************ 00:03:43.371 START TEST rpc_trace_cmd_test 00:03:43.371 ************************************ 00:03:43.371 00:11:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:43.371 00:11:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:43.371 00:11:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:43.371 00:11:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.371 00:11:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:43.371 00:11:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.371 00:11:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:43.371 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3011535", 00:03:43.371 "tpoint_group_mask": "0x8", 00:03:43.371 "iscsi_conn": { 00:03:43.371 "mask": "0x2", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "scsi": { 00:03:43.371 "mask": "0x4", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "bdev": { 00:03:43.371 "mask": "0x8", 00:03:43.371 "tpoint_mask": "0xffffffffffffffff" 00:03:43.371 }, 00:03:43.371 "nvmf_rdma": { 00:03:43.371 "mask": "0x10", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "nvmf_tcp": { 00:03:43.371 "mask": "0x20", 00:03:43.371 
"tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "ftl": { 00:03:43.371 "mask": "0x40", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "blobfs": { 00:03:43.371 "mask": "0x80", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "dsa": { 00:03:43.371 "mask": "0x200", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "thread": { 00:03:43.371 "mask": "0x400", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "nvme_pcie": { 00:03:43.371 "mask": "0x800", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "iaa": { 00:03:43.371 "mask": "0x1000", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "nvme_tcp": { 00:03:43.371 "mask": "0x2000", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "bdev_nvme": { 00:03:43.371 "mask": "0x4000", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "sock": { 00:03:43.371 "mask": "0x8000", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "blob": { 00:03:43.371 "mask": "0x10000", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "bdev_raid": { 00:03:43.371 "mask": "0x20000", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 }, 00:03:43.371 "scheduler": { 00:03:43.371 "mask": "0x40000", 00:03:43.371 "tpoint_mask": "0x0" 00:03:43.371 } 00:03:43.371 }' 00:03:43.371 00:11:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:43.371 00:11:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:43.371 00:11:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:43.633 00:11:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:43.633 00:11:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:43.633 00:11:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:43.633 00:11:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:43.633 00:11:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:43.633 00:11:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:43.633 00:11:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:43.633 00:03:43.633 real 0m0.214s 00:03:43.633 user 0m0.176s 00:03:43.633 sys 0m0.027s 00:03:43.633 00:11:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.633 00:11:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:43.633 ************************************ 00:03:43.633 END TEST rpc_trace_cmd_test 00:03:43.633 ************************************ 00:03:43.633 00:11:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:43.633 00:11:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:43.633 00:11:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:43.633 00:11:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.633 00:11:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.633 00:11:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.633 ************************************ 00:03:43.633 START TEST rpc_daemon_integrity 00:03:43.633 ************************************ 00:03:43.633 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:43.633 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:43.633 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.633 00:11:14 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.633 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.633 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:43.633 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:43.894 { 00:03:43.894 "name": "Malloc2", 00:03:43.894 "aliases": [ 00:03:43.894 "35fe4246-a526-4d94-a7d6-888969d05b5b" 00:03:43.894 ], 00:03:43.894 "product_name": "Malloc disk", 00:03:43.894 "block_size": 512, 00:03:43.894 "num_blocks": 16384, 00:03:43.894 "uuid": "35fe4246-a526-4d94-a7d6-888969d05b5b", 00:03:43.894 "assigned_rate_limits": { 00:03:43.894 "rw_ios_per_sec": 0, 00:03:43.894 "rw_mbytes_per_sec": 0, 00:03:43.894 "r_mbytes_per_sec": 0, 00:03:43.894 "w_mbytes_per_sec": 0 00:03:43.894 }, 00:03:43.894 "claimed": false, 00:03:43.894 "zoned": false, 00:03:43.894 "supported_io_types": { 00:03:43.894 "read": true, 00:03:43.894 "write": true, 00:03:43.894 "unmap": true, 00:03:43.894 "flush": true, 00:03:43.894 "reset": true, 00:03:43.894 "nvme_admin": false, 00:03:43.894 "nvme_io": false, 00:03:43.894 "nvme_io_md": false, 00:03:43.894 "write_zeroes": true, 00:03:43.894 "zcopy": true, 00:03:43.894 "get_zone_info": false, 00:03:43.894 "zone_management": false, 00:03:43.894 "zone_append": false, 00:03:43.894 "compare": false, 00:03:43.894 "compare_and_write": false, 00:03:43.894 "abort": true, 00:03:43.894 "seek_hole": false, 00:03:43.894 "seek_data": false, 00:03:43.894 "copy": true, 00:03:43.894 "nvme_iov_md": false 00:03:43.894 }, 00:03:43.894 "memory_domains": [ 00:03:43.894 { 00:03:43.894 "dma_device_id": "system", 00:03:43.894 "dma_device_type": 1 00:03:43.894 }, 00:03:43.894 { 00:03:43.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.894 "dma_device_type": 2 00:03:43.894 } 00:03:43.894 ], 00:03:43.894 "driver_specific": {} 00:03:43.894 } 00:03:43.894 ]' 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.894 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.894 [2024-10-09 00:11:14.374286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:43.894 
[2024-10-09 00:11:14.374331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:43.894 [2024-10-09 00:11:14.374348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5f150 00:03:43.895 [2024-10-09 00:11:14.374356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:43.895 [2024-10-09 00:11:14.375892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:43.895 [2024-10-09 00:11:14.375934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:43.895 Passthru0 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:43.895 { 00:03:43.895 "name": "Malloc2", 00:03:43.895 "aliases": [ 00:03:43.895 "35fe4246-a526-4d94-a7d6-888969d05b5b" 00:03:43.895 ], 00:03:43.895 "product_name": "Malloc disk", 00:03:43.895 "block_size": 512, 00:03:43.895 "num_blocks": 16384, 00:03:43.895 "uuid": "35fe4246-a526-4d94-a7d6-888969d05b5b", 00:03:43.895 "assigned_rate_limits": { 00:03:43.895 "rw_ios_per_sec": 0, 00:03:43.895 "rw_mbytes_per_sec": 0, 00:03:43.895 "r_mbytes_per_sec": 0, 00:03:43.895 "w_mbytes_per_sec": 0 00:03:43.895 }, 00:03:43.895 "claimed": true, 00:03:43.895 "claim_type": "exclusive_write", 00:03:43.895 "zoned": false, 00:03:43.895 "supported_io_types": { 00:03:43.895 "read": true, 00:03:43.895 "write": true, 00:03:43.895 "unmap": true, 00:03:43.895 "flush": true, 00:03:43.895 "reset": true, 00:03:43.895 "nvme_admin": false, 00:03:43.895 "nvme_io": false, 00:03:43.895 "nvme_io_md": false, 00:03:43.895 "write_zeroes": true, 00:03:43.895 "zcopy": true, 00:03:43.895 "get_zone_info": false, 00:03:43.895 "zone_management": false, 00:03:43.895 "zone_append": false, 00:03:43.895 "compare": false, 00:03:43.895 "compare_and_write": false, 00:03:43.895 "abort": true, 00:03:43.895 "seek_hole": false, 00:03:43.895 "seek_data": false, 00:03:43.895 "copy": true, 00:03:43.895 "nvme_iov_md": false 00:03:43.895 }, 00:03:43.895 "memory_domains": [ 00:03:43.895 { 00:03:43.895 "dma_device_id": "system", 00:03:43.895 "dma_device_type": 1 00:03:43.895 }, 00:03:43.895 { 00:03:43.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.895 "dma_device_type": 2 00:03:43.895 } 00:03:43.895 ], 00:03:43.895 "driver_specific": {} 00:03:43.895 }, 00:03:43.895 { 00:03:43.895 "name": "Passthru0", 00:03:43.895 "aliases": [ 00:03:43.895 "beb2a80e-6608-5c97-b800-7e9643592272" 00:03:43.895 ], 00:03:43.895 "product_name": "passthru", 00:03:43.895 "block_size": 512, 00:03:43.895 "num_blocks": 16384, 00:03:43.895 "uuid": "beb2a80e-6608-5c97-b800-7e9643592272", 00:03:43.895 "assigned_rate_limits": { 00:03:43.895 "rw_ios_per_sec": 0, 00:03:43.895 "rw_mbytes_per_sec": 0, 00:03:43.895 "r_mbytes_per_sec": 0, 00:03:43.895 "w_mbytes_per_sec": 0 00:03:43.895 }, 00:03:43.895 "claimed": false, 00:03:43.895 "zoned": false, 00:03:43.895 "supported_io_types": { 00:03:43.895 "read": true, 00:03:43.895 "write": true, 00:03:43.895 "unmap": true, 00:03:43.895 "flush": true, 00:03:43.895 "reset": true, 
00:03:43.895 "nvme_admin": false, 00:03:43.895 "nvme_io": false, 00:03:43.895 "nvme_io_md": false, 00:03:43.895 "write_zeroes": true, 00:03:43.895 "zcopy": true, 00:03:43.895 "get_zone_info": false, 00:03:43.895 "zone_management": false, 00:03:43.895 "zone_append": false, 00:03:43.895 "compare": false, 00:03:43.895 "compare_and_write": false, 00:03:43.895 "abort": true, 00:03:43.895 "seek_hole": false, 00:03:43.895 "seek_data": false, 00:03:43.895 "copy": true, 00:03:43.895 "nvme_iov_md": false 00:03:43.895 }, 00:03:43.895 "memory_domains": [ 00:03:43.895 { 00:03:43.895 "dma_device_id": "system", 00:03:43.895 "dma_device_type": 1 00:03:43.895 }, 00:03:43.895 { 00:03:43.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.895 "dma_device_type": 2 00:03:43.895 } 00:03:43.895 ], 00:03:43.895 "driver_specific": { 00:03:43.895 "passthru": { 00:03:43.895 "name": "Passthru0", 00:03:43.895 "base_bdev_name": "Malloc2" 00:03:43.895 } 00:03:43.895 } 00:03:43.895 } 00:03:43.895 ]' 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:43.895 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:44.155 00:11:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:44.155 00:03:44.155 real 0m0.310s 00:03:44.155 user 0m0.188s 00:03:44.155 sys 0m0.051s 00:03:44.155 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.155 00:11:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.155 ************************************ 00:03:44.155 END TEST rpc_daemon_integrity 00:03:44.155 ************************************ 00:03:44.155 00:11:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:44.155 00:11:14 rpc -- rpc/rpc.sh@84 -- # killprocess 3011535 00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@950 -- # '[' -z 3011535 ']' 00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@954 -- # kill -0 3011535 00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@955 -- # uname 00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3011535 
00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3011535' 00:03:44.155 killing process with pid 3011535 00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@969 -- # kill 3011535 00:03:44.155 00:11:14 rpc -- common/autotest_common.sh@974 -- # wait 3011535 00:03:44.415 00:03:44.415 real 0m2.699s 00:03:44.415 user 0m3.374s 00:03:44.415 sys 0m0.875s 00:03:44.415 00:11:14 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.415 00:11:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.415 ************************************ 00:03:44.415 END TEST rpc 00:03:44.415 ************************************ 00:03:44.415 00:11:14 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:44.415 00:11:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.415 00:11:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.415 00:11:14 -- common/autotest_common.sh@10 -- # set +x 00:03:44.415 ************************************ 00:03:44.415 START TEST skip_rpc 00:03:44.415 ************************************ 00:03:44.415 00:11:14 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:44.676 * Looking for test storage... 00:03:44.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:44.676 00:11:15 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:44.676 00:11:15 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:44.676 00:11:15 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:44.676 00:11:15 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.676 00:11:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:44.676 00:11:15 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.676 00:11:15 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.676 --rc genhtml_branch_coverage=1 00:03:44.676 --rc genhtml_function_coverage=1 00:03:44.676 --rc genhtml_legend=1 00:03:44.676 --rc geninfo_all_blocks=1 00:03:44.676 --rc geninfo_unexecuted_blocks=1 00:03:44.676 00:03:44.676 ' 00:03:44.676 00:11:15 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.676 --rc genhtml_branch_coverage=1 00:03:44.676 --rc genhtml_function_coverage=1 00:03:44.676 --rc genhtml_legend=1 00:03:44.676 --rc geninfo_all_blocks=1 00:03:44.676 --rc geninfo_unexecuted_blocks=1 00:03:44.676 00:03:44.676 ' 00:03:44.677 00:11:15 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:44.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.677 --rc genhtml_branch_coverage=1 00:03:44.677 --rc genhtml_function_coverage=1 00:03:44.677 --rc genhtml_legend=1 00:03:44.677 --rc geninfo_all_blocks=1 00:03:44.677 --rc geninfo_unexecuted_blocks=1 00:03:44.677 00:03:44.677 ' 00:03:44.677 00:11:15 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:44.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.677 --rc genhtml_branch_coverage=1 00:03:44.677 --rc genhtml_function_coverage=1 00:03:44.677 --rc genhtml_legend=1 00:03:44.677 --rc geninfo_all_blocks=1 00:03:44.677 --rc geninfo_unexecuted_blocks=1 00:03:44.677 00:03:44.677 ' 00:03:44.677 00:11:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:44.677 00:11:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:44.677 00:11:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:44.677 00:11:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.677 00:11:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.677 00:11:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.677 ************************************ 00:03:44.677 START TEST skip_rpc 00:03:44.677 ************************************ 00:03:44.677 00:11:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:44.677 
00:11:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3012202 00:03:44.677 00:11:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:44.677 00:11:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:44.677 00:11:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:44.677 [2024-10-09 00:11:15.292757] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:03:44.677 [2024-10-09 00:11:15.292824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3012202 ] 00:03:44.937 [2024-10-09 00:11:15.374036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.937 [2024-10-09 00:11:15.470319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3012202 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3012202 ']' 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3012202 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3012202 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3012202' 00:03:50.231 killing process with pid 3012202 00:03:50.231 00:11:20 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3012202 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3012202 00:03:50.231 00:03:50.231 real 0m5.282s 00:03:50.231 user 0m5.000s 00:03:50.231 sys 0m0.317s 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.231 00:11:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.231 ************************************ 00:03:50.231 END TEST skip_rpc 00:03:50.231 ************************************ 00:03:50.231 00:11:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:50.231 00:11:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.231 00:11:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.231 00:11:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.231 ************************************ 00:03:50.231 START TEST skip_rpc_with_json 00:03:50.231 ************************************ 00:03:50.231 00:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:50.231 00:11:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:50.231 00:11:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3013250 00:03:50.231 00:11:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.232 00:11:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3013250 00:03:50.232 00:11:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:50.232 00:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3013250 ']' 00:03:50.232 00:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.232 00:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:50.232 00:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.232 00:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:50.232 00:11:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:50.232 [2024-10-09 00:11:20.649556] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
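The skip_rpc case that just finished starts spdk_tgt with --no-rpc-server and asserts that an RPC then fails; the equivalent manual check against the same binary looks roughly like this (expected to error out because no RPC listener is created):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  ./scripts/rpc.py spdk_get_version   # should fail: no /var/tmp/spdk.sock listener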
00:03:50.232 [2024-10-09 00:11:20.649615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013250 ] 00:03:50.232 [2024-10-09 00:11:20.731297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.232 [2024-10-09 00:11:20.806074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.849 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:50.849 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:50.849 00:11:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:50.849 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.849 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:50.849 [2024-10-09 00:11:21.460434] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:50.849 request: 00:03:50.849 { 00:03:50.849 "trtype": "tcp", 00:03:50.849 "method": "nvmf_get_transports", 00:03:50.849 "req_id": 1 00:03:50.849 } 00:03:50.849 Got JSON-RPC error response 00:03:50.849 response: 00:03:50.849 { 00:03:50.849 "code": -19, 00:03:50.849 "message": "No such device" 00:03:50.849 } 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:51.203 [2024-10-09 00:11:21.472539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.203 00:11:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:51.203 { 00:03:51.203 "subsystems": [ 00:03:51.203 { 00:03:51.203 "subsystem": "fsdev", 00:03:51.203 "config": [ 00:03:51.203 { 00:03:51.203 "method": "fsdev_set_opts", 00:03:51.203 "params": { 00:03:51.203 "fsdev_io_pool_size": 65535, 00:03:51.203 "fsdev_io_cache_size": 256 00:03:51.203 } 00:03:51.203 } 00:03:51.203 ] 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "subsystem": "vfio_user_target", 00:03:51.203 "config": null 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "subsystem": "keyring", 00:03:51.203 "config": [] 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "subsystem": "iobuf", 00:03:51.203 "config": [ 00:03:51.203 { 00:03:51.203 "method": "iobuf_set_options", 00:03:51.203 "params": { 00:03:51.203 "small_pool_count": 8192, 00:03:51.203 "large_pool_count": 1024, 00:03:51.203 "small_bufsize": 8192, 00:03:51.203 "large_bufsize": 135168 00:03:51.203 } 00:03:51.203 } 00:03:51.203 ] 00:03:51.203 }, 00:03:51.203 { 
00:03:51.203 "subsystem": "sock", 00:03:51.203 "config": [ 00:03:51.203 { 00:03:51.203 "method": "sock_set_default_impl", 00:03:51.203 "params": { 00:03:51.203 "impl_name": "posix" 00:03:51.203 } 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "method": "sock_impl_set_options", 00:03:51.203 "params": { 00:03:51.203 "impl_name": "ssl", 00:03:51.203 "recv_buf_size": 4096, 00:03:51.203 "send_buf_size": 4096, 00:03:51.203 "enable_recv_pipe": true, 00:03:51.203 "enable_quickack": false, 00:03:51.203 "enable_placement_id": 0, 00:03:51.203 "enable_zerocopy_send_server": true, 00:03:51.203 "enable_zerocopy_send_client": false, 00:03:51.203 "zerocopy_threshold": 0, 00:03:51.203 "tls_version": 0, 00:03:51.203 "enable_ktls": false 00:03:51.203 } 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "method": "sock_impl_set_options", 00:03:51.203 "params": { 00:03:51.203 "impl_name": "posix", 00:03:51.203 "recv_buf_size": 2097152, 00:03:51.203 "send_buf_size": 2097152, 00:03:51.203 "enable_recv_pipe": true, 00:03:51.203 "enable_quickack": false, 00:03:51.203 "enable_placement_id": 0, 00:03:51.203 "enable_zerocopy_send_server": true, 00:03:51.203 "enable_zerocopy_send_client": false, 00:03:51.203 "zerocopy_threshold": 0, 00:03:51.203 "tls_version": 0, 00:03:51.203 "enable_ktls": false 00:03:51.203 } 00:03:51.203 } 00:03:51.203 ] 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "subsystem": "vmd", 00:03:51.203 "config": [] 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "subsystem": "accel", 00:03:51.203 "config": [ 00:03:51.203 { 00:03:51.203 "method": "accel_set_options", 00:03:51.203 "params": { 00:03:51.203 "small_cache_size": 128, 00:03:51.203 "large_cache_size": 16, 00:03:51.203 "task_count": 2048, 00:03:51.203 "sequence_count": 2048, 00:03:51.203 "buf_count": 2048 00:03:51.203 } 00:03:51.203 } 00:03:51.203 ] 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "subsystem": "bdev", 00:03:51.203 "config": [ 00:03:51.203 { 00:03:51.203 "method": "bdev_set_options", 00:03:51.203 "params": { 00:03:51.203 "bdev_io_pool_size": 65535, 00:03:51.203 "bdev_io_cache_size": 256, 00:03:51.203 "bdev_auto_examine": true, 00:03:51.203 "iobuf_small_cache_size": 128, 00:03:51.203 "iobuf_large_cache_size": 16 00:03:51.203 } 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "method": "bdev_raid_set_options", 00:03:51.203 "params": { 00:03:51.203 "process_window_size_kb": 1024, 00:03:51.203 "process_max_bandwidth_mb_sec": 0 00:03:51.203 } 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "method": "bdev_iscsi_set_options", 00:03:51.203 "params": { 00:03:51.203 "timeout_sec": 30 00:03:51.203 } 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "method": "bdev_nvme_set_options", 00:03:51.203 "params": { 00:03:51.203 "action_on_timeout": "none", 00:03:51.203 "timeout_us": 0, 00:03:51.203 "timeout_admin_us": 0, 00:03:51.203 "keep_alive_timeout_ms": 10000, 00:03:51.203 "arbitration_burst": 0, 00:03:51.203 "low_priority_weight": 0, 00:03:51.203 "medium_priority_weight": 0, 00:03:51.203 "high_priority_weight": 0, 00:03:51.203 "nvme_adminq_poll_period_us": 10000, 00:03:51.203 "nvme_ioq_poll_period_us": 0, 00:03:51.203 "io_queue_requests": 0, 00:03:51.203 "delay_cmd_submit": true, 00:03:51.203 "transport_retry_count": 4, 00:03:51.203 "bdev_retry_count": 3, 00:03:51.203 "transport_ack_timeout": 0, 00:03:51.203 "ctrlr_loss_timeout_sec": 0, 00:03:51.203 "reconnect_delay_sec": 0, 00:03:51.203 "fast_io_fail_timeout_sec": 0, 00:03:51.203 "disable_auto_failback": false, 00:03:51.203 "generate_uuids": false, 00:03:51.203 "transport_tos": 0, 00:03:51.203 "nvme_error_stat": false, 
00:03:51.203 "rdma_srq_size": 0, 00:03:51.203 "io_path_stat": false, 00:03:51.203 "allow_accel_sequence": false, 00:03:51.203 "rdma_max_cq_size": 0, 00:03:51.203 "rdma_cm_event_timeout_ms": 0, 00:03:51.203 "dhchap_digests": [ 00:03:51.203 "sha256", 00:03:51.203 "sha384", 00:03:51.203 "sha512" 00:03:51.203 ], 00:03:51.203 "dhchap_dhgroups": [ 00:03:51.203 "null", 00:03:51.203 "ffdhe2048", 00:03:51.203 "ffdhe3072", 00:03:51.203 "ffdhe4096", 00:03:51.203 "ffdhe6144", 00:03:51.203 "ffdhe8192" 00:03:51.203 ] 00:03:51.203 } 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "method": "bdev_nvme_set_hotplug", 00:03:51.203 "params": { 00:03:51.203 "period_us": 100000, 00:03:51.203 "enable": false 00:03:51.203 } 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "method": "bdev_wait_for_examine" 00:03:51.203 } 00:03:51.203 ] 00:03:51.203 }, 00:03:51.203 { 00:03:51.203 "subsystem": "scsi", 00:03:51.203 "config": null 00:03:51.203 }, 00:03:51.203 { 00:03:51.204 "subsystem": "scheduler", 00:03:51.204 "config": [ 00:03:51.204 { 00:03:51.204 "method": "framework_set_scheduler", 00:03:51.204 "params": { 00:03:51.204 "name": "static" 00:03:51.204 } 00:03:51.204 } 00:03:51.204 ] 00:03:51.204 }, 00:03:51.204 { 00:03:51.204 "subsystem": "vhost_scsi", 00:03:51.204 "config": [] 00:03:51.204 }, 00:03:51.204 { 00:03:51.204 "subsystem": "vhost_blk", 00:03:51.204 "config": [] 00:03:51.204 }, 00:03:51.204 { 00:03:51.204 "subsystem": "ublk", 00:03:51.204 "config": [] 00:03:51.204 }, 00:03:51.204 { 00:03:51.204 "subsystem": "nbd", 00:03:51.204 "config": [] 00:03:51.204 }, 00:03:51.204 { 00:03:51.204 "subsystem": "nvmf", 00:03:51.204 "config": [ 00:03:51.204 { 00:03:51.204 "method": "nvmf_set_config", 00:03:51.204 "params": { 00:03:51.204 "discovery_filter": "match_any", 00:03:51.204 "admin_cmd_passthru": { 00:03:51.204 "identify_ctrlr": false 00:03:51.204 }, 00:03:51.204 "dhchap_digests": [ 00:03:51.204 "sha256", 00:03:51.204 "sha384", 00:03:51.204 "sha512" 00:03:51.204 ], 00:03:51.204 "dhchap_dhgroups": [ 00:03:51.204 "null", 00:03:51.204 "ffdhe2048", 00:03:51.204 "ffdhe3072", 00:03:51.204 "ffdhe4096", 00:03:51.204 "ffdhe6144", 00:03:51.204 "ffdhe8192" 00:03:51.204 ] 00:03:51.204 } 00:03:51.204 }, 00:03:51.204 { 00:03:51.204 "method": "nvmf_set_max_subsystems", 00:03:51.204 "params": { 00:03:51.204 "max_subsystems": 1024 00:03:51.204 } 00:03:51.204 }, 00:03:51.204 { 00:03:51.204 "method": "nvmf_set_crdt", 00:03:51.204 "params": { 00:03:51.204 "crdt1": 0, 00:03:51.204 "crdt2": 0, 00:03:51.204 "crdt3": 0 00:03:51.204 } 00:03:51.204 }, 00:03:51.204 { 00:03:51.204 "method": "nvmf_create_transport", 00:03:51.204 "params": { 00:03:51.204 "trtype": "TCP", 00:03:51.204 "max_queue_depth": 128, 00:03:51.204 "max_io_qpairs_per_ctrlr": 127, 00:03:51.204 "in_capsule_data_size": 4096, 00:03:51.204 "max_io_size": 131072, 00:03:51.204 "io_unit_size": 131072, 00:03:51.204 "max_aq_depth": 128, 00:03:51.204 "num_shared_buffers": 511, 00:03:51.204 "buf_cache_size": 4294967295, 00:03:51.204 "dif_insert_or_strip": false, 00:03:51.204 "zcopy": false, 00:03:51.204 "c2h_success": true, 00:03:51.204 "sock_priority": 0, 00:03:51.204 "abort_timeout_sec": 1, 00:03:51.204 "ack_timeout": 0, 00:03:51.204 "data_wr_pool_size": 0 00:03:51.204 } 00:03:51.204 } 00:03:51.204 ] 00:03:51.204 }, 00:03:51.204 { 00:03:51.204 "subsystem": "iscsi", 00:03:51.204 "config": [ 00:03:51.204 { 00:03:51.204 "method": "iscsi_set_options", 00:03:51.204 "params": { 00:03:51.204 "node_base": "iqn.2016-06.io.spdk", 00:03:51.204 "max_sessions": 128, 00:03:51.204 
"max_connections_per_session": 2, 00:03:51.204 "max_queue_depth": 64, 00:03:51.204 "default_time2wait": 2, 00:03:51.204 "default_time2retain": 20, 00:03:51.204 "first_burst_length": 8192, 00:03:51.204 "immediate_data": true, 00:03:51.204 "allow_duplicated_isid": false, 00:03:51.204 "error_recovery_level": 0, 00:03:51.204 "nop_timeout": 60, 00:03:51.204 "nop_in_interval": 30, 00:03:51.204 "disable_chap": false, 00:03:51.204 "require_chap": false, 00:03:51.204 "mutual_chap": false, 00:03:51.204 "chap_group": 0, 00:03:51.204 "max_large_datain_per_connection": 64, 00:03:51.204 "max_r2t_per_connection": 4, 00:03:51.204 "pdu_pool_size": 36864, 00:03:51.204 "immediate_data_pool_size": 16384, 00:03:51.204 "data_out_pool_size": 2048 00:03:51.204 } 00:03:51.204 } 00:03:51.204 ] 00:03:51.204 } 00:03:51.204 ] 00:03:51.204 } 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3013250 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3013250 ']' 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3013250 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3013250 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3013250' 00:03:51.204 killing process with pid 3013250 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3013250 00:03:51.204 00:11:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3013250 00:03:51.466 00:11:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3013581 00:03:51.466 00:11:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:51.466 00:11:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3013581 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3013581 ']' 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3013581 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3013581 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 3013581' 00:03:56.749 killing process with pid 3013581 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3013581 00:03:56.749 00:11:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3013581 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.749 00:03:56.749 real 0m6.599s 00:03:56.749 user 0m6.486s 00:03:56.749 sys 0m0.598s 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.749 ************************************ 00:03:56.749 END TEST skip_rpc_with_json 00:03:56.749 ************************************ 00:03:56.749 00:11:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:56.749 00:11:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.749 00:11:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.749 00:11:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.749 ************************************ 00:03:56.749 START TEST skip_rpc_with_delay 00:03:56.749 ************************************ 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.749 [2024-10-09 
00:11:27.332568] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:56.749 [2024-10-09 00:11:27.332662] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:56.749 00:03:56.749 real 0m0.078s 00:03:56.749 user 0m0.051s 00:03:56.749 sys 0m0.027s 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.749 00:11:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:56.749 ************************************ 00:03:56.749 END TEST skip_rpc_with_delay 00:03:56.749 ************************************ 00:03:57.010 00:11:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:57.010 00:11:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:57.010 00:11:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:57.010 00:11:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.010 00:11:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.010 00:11:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.010 ************************************ 00:03:57.010 START TEST exit_on_failed_rpc_init 00:03:57.010 ************************************ 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3014733 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3014733 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3014733 ']' 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:57.010 00:11:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:57.010 [2024-10-09 00:11:27.491006] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:03:57.010 [2024-10-09 00:11:27.491054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014733 ] 00:03:57.010 [2024-10-09 00:11:27.566698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.010 [2024-10-09 00:11:27.623803] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:57.952 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:57.952 [2024-10-09 00:11:28.346031] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:03:57.952 [2024-10-09 00:11:28.346081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014989 ] 00:03:57.953 [2024-10-09 00:11:28.422485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.953 [2024-10-09 00:11:28.486236] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.953 [2024-10-09 00:11:28.486292] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:57.953 [2024-10-09 00:11:28.486302] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:57.953 [2024-10-09 00:11:28.486309] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3014733 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3014733 ']' 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3014733 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:57.953 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3014733 00:03:58.213 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:58.213 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:58.213 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3014733' 00:03:58.213 killing process with pid 3014733 00:03:58.213 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3014733 00:03:58.213 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3014733 00:03:58.213 00:03:58.213 real 0m1.377s 00:03:58.213 user 0m1.631s 00:03:58.213 sys 0m0.394s 00:03:58.213 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.213 00:11:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:58.213 ************************************ 00:03:58.213 END TEST exit_on_failed_rpc_init 00:03:58.213 ************************************ 00:03:58.473 00:11:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:58.473 00:03:58.473 real 0m13.869s 00:03:58.473 user 0m13.382s 00:03:58.473 sys 0m1.684s 00:03:58.473 00:11:28 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.473 00:11:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.473 ************************************ 00:03:58.473 END TEST skip_rpc 00:03:58.473 ************************************ 00:03:58.473 00:11:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:58.473 00:11:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.473 00:11:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.473 00:11:28 -- 
common/autotest_common.sh@10 -- # set +x 00:03:58.473 ************************************ 00:03:58.473 START TEST rpc_client 00:03:58.473 ************************************ 00:03:58.473 00:11:28 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:58.473 * Looking for test storage... 00:03:58.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:58.473 00:11:29 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:58.473 00:11:29 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:03:58.473 00:11:29 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:58.473 00:11:29 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:58.473 00:11:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.473 00:11:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.473 00:11:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.473 00:11:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.473 00:11:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.473 00:11:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.473 00:11:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.734 00:11:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:58.734 00:11:29 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.734 00:11:29 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:58.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.734 --rc genhtml_branch_coverage=1 00:03:58.734 --rc genhtml_function_coverage=1 00:03:58.734 --rc genhtml_legend=1 00:03:58.734 --rc geninfo_all_blocks=1 00:03:58.734 --rc geninfo_unexecuted_blocks=1 00:03:58.734 00:03:58.734 ' 00:03:58.734 00:11:29 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:58.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.734 --rc genhtml_branch_coverage=1 00:03:58.734 --rc genhtml_function_coverage=1 00:03:58.734 --rc genhtml_legend=1 00:03:58.734 --rc geninfo_all_blocks=1 00:03:58.734 --rc geninfo_unexecuted_blocks=1 00:03:58.734 00:03:58.734 ' 00:03:58.734 00:11:29 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:58.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.734 --rc genhtml_branch_coverage=1 00:03:58.734 --rc genhtml_function_coverage=1 00:03:58.734 --rc genhtml_legend=1 00:03:58.734 --rc geninfo_all_blocks=1 00:03:58.734 --rc geninfo_unexecuted_blocks=1 00:03:58.734 00:03:58.734 ' 00:03:58.734 00:11:29 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:58.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.734 --rc genhtml_branch_coverage=1 00:03:58.734 --rc genhtml_function_coverage=1 00:03:58.734 --rc genhtml_legend=1 00:03:58.734 --rc geninfo_all_blocks=1 00:03:58.734 --rc geninfo_unexecuted_blocks=1 00:03:58.734 00:03:58.734 ' 00:03:58.734 00:11:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:58.734 OK 00:03:58.734 00:11:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:58.734 00:03:58.734 real 0m0.225s 00:03:58.734 user 0m0.130s 00:03:58.734 sys 0m0.109s 00:03:58.734 00:11:29 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.734 00:11:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:58.734 ************************************ 00:03:58.734 END TEST rpc_client 00:03:58.734 ************************************ 00:03:58.734 00:11:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:03:58.734 00:11:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.734 00:11:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.734 00:11:29 -- common/autotest_common.sh@10 -- # set +x 00:03:58.734 ************************************ 00:03:58.734 START TEST json_config 00:03:58.734 ************************************ 00:03:58.734 00:11:29 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:58.734 00:11:29 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:58.734 00:11:29 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:03:58.734 00:11:29 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:58.996 00:11:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.996 00:11:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.996 00:11:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.996 00:11:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.996 00:11:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.996 00:11:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.996 00:11:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.996 00:11:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.996 00:11:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.996 00:11:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.996 00:11:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.996 00:11:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:58.996 00:11:29 json_config -- scripts/common.sh@345 -- # : 1 00:03:58.996 00:11:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.996 00:11:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.996 00:11:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:58.996 00:11:29 json_config -- scripts/common.sh@353 -- # local d=1 00:03:58.996 00:11:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.996 00:11:29 json_config -- scripts/common.sh@355 -- # echo 1 00:03:58.996 00:11:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.996 00:11:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:58.996 00:11:29 json_config -- scripts/common.sh@353 -- # local d=2 00:03:58.996 00:11:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.996 00:11:29 json_config -- scripts/common.sh@355 -- # echo 2 00:03:58.996 00:11:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.996 00:11:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.996 00:11:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.996 00:11:29 json_config -- scripts/common.sh@368 -- # return 0 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:58.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.996 --rc genhtml_branch_coverage=1 00:03:58.996 --rc genhtml_function_coverage=1 00:03:58.996 --rc genhtml_legend=1 00:03:58.996 --rc geninfo_all_blocks=1 00:03:58.996 --rc geninfo_unexecuted_blocks=1 00:03:58.996 00:03:58.996 ' 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:58.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.996 --rc genhtml_branch_coverage=1 00:03:58.996 --rc genhtml_function_coverage=1 00:03:58.996 --rc genhtml_legend=1 00:03:58.996 --rc geninfo_all_blocks=1 00:03:58.996 --rc geninfo_unexecuted_blocks=1 00:03:58.996 00:03:58.996 ' 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:58.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.996 --rc genhtml_branch_coverage=1 00:03:58.996 --rc genhtml_function_coverage=1 00:03:58.996 --rc genhtml_legend=1 00:03:58.996 --rc geninfo_all_blocks=1 00:03:58.996 --rc geninfo_unexecuted_blocks=1 00:03:58.996 00:03:58.996 ' 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:58.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.996 --rc genhtml_branch_coverage=1 00:03:58.996 --rc genhtml_function_coverage=1 00:03:58.996 --rc genhtml_legend=1 00:03:58.996 --rc geninfo_all_blocks=1 00:03:58.996 --rc geninfo_unexecuted_blocks=1 00:03:58.996 00:03:58.996 ' 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:58.996 00:11:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:58.996 00:11:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:58.996 00:11:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:58.996 00:11:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:58.996 00:11:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:58.996 00:11:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.996 00:11:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.996 00:11:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.996 00:11:29 json_config -- paths/export.sh@5 -- # export PATH 00:03:58.996 00:11:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@51 -- # : 0 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:58.996 00:11:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:58.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:58.996 00:11:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:58.996 INFO: JSON configuration test init 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.996 00:11:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.996 00:11:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:58.996 00:11:29 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:58.996 00:11:29 json_config -- json_config/common.sh@10 -- # shift 00:03:58.996 00:11:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.996 00:11:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.996 00:11:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.996 00:11:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.996 00:11:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.996 00:11:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3015369 00:03:58.996 00:11:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.997 Waiting for target to run... 00:03:58.997 00:11:29 json_config -- json_config/common.sh@25 -- # waitforlisten 3015369 /var/tmp/spdk_tgt.sock 00:03:58.997 00:11:29 json_config -- common/autotest_common.sh@831 -- # '[' -z 3015369 ']' 00:03:58.997 00:11:29 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.997 00:11:29 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:58.997 00:11:29 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.997 00:11:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:58.997 00:11:29 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:58.997 00:11:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.997 [2024-10-09 00:11:29.532177] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:03:58.997 [2024-10-09 00:11:29.532251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015369 ] 00:03:59.258 [2024-10-09 00:11:29.850222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.517 [2024-10-09 00:11:29.904125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.777 00:11:30 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:59.777 00:11:30 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:59.777 00:11:30 json_config -- json_config/common.sh@26 -- # echo '' 00:03:59.777 00:03:59.777 00:11:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:59.777 00:11:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:59.777 00:11:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.777 00:11:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.777 00:11:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:59.777 00:11:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:59.777 00:11:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.777 00:11:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.778 00:11:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:59.778 00:11:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:59.778 00:11:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:00.347 00:11:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:00.347 00:11:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:00.347 00:11:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.347 00:11:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.347 00:11:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:00.347 00:11:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:00.347 00:11:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:00.347 00:11:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:00.347 00:11:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:00.347 00:11:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:00.347 00:11:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:00.347 00:11:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:00.608 00:11:31 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@54 -- # sort 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:00.608 00:11:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.608 00:11:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:00.608 00:11:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.608 00:11:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:00.608 00:11:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:00.608 00:11:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:00.868 MallocForNvmf0 00:04:00.868 00:11:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:00.868 00:11:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:00.868 MallocForNvmf1 00:04:00.868 00:11:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:00.868 00:11:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:01.129 [2024-10-09 00:11:31.628092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:01.129 00:11:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:01.129 00:11:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:01.389 00:11:31 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:01.389 00:11:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:01.389 00:11:31 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.390 00:11:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.650 00:11:32 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.650 00:11:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.650 [2024-10-09 00:11:32.266057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:01.650 00:11:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:01.650 00:11:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.650 00:11:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.910 00:11:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:01.910 00:11:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.910 00:11:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.910 00:11:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:01.910 00:11:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.910 00:11:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.910 MallocBdevForConfigChangeCheck 00:04:01.910 00:11:32 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:01.910 00:11:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.910 00:11:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.171 00:11:32 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:02.171 00:11:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.432 00:11:32 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:02.432 INFO: shutting down applications... 
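(Aside, not part of the captured output: the json_config steps traced above configure spdk_tgt entirely over JSON-RPC. A minimal sketch of the equivalent manual sequence, assuming an SPDK checkout and a target already listening on /var/tmp/spdk_tgt.sock; the commands simply mirror the rpc.py calls visible in the trace and are illustrative only.)

    #!/usr/bin/env bash
    # Sketch mirroring the rpc.py calls traced in the log above (illustrative only).
    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB malloc bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB malloc bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport, same flags as the traced call
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > spdk_tgt_config.json                # snapshot the live configuration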
00:04:02.432 00:11:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:02.432 00:11:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:02.432 00:11:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:02.432 00:11:32 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:02.692 Calling clear_iscsi_subsystem 00:04:02.692 Calling clear_nvmf_subsystem 00:04:02.692 Calling clear_nbd_subsystem 00:04:02.692 Calling clear_ublk_subsystem 00:04:02.692 Calling clear_vhost_blk_subsystem 00:04:02.692 Calling clear_vhost_scsi_subsystem 00:04:02.692 Calling clear_bdev_subsystem 00:04:02.692 00:11:33 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:02.692 00:11:33 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:02.692 00:11:33 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:02.692 00:11:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.692 00:11:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:02.692 00:11:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:03.262 00:11:33 json_config -- json_config/json_config.sh@352 -- # break 00:04:03.262 00:11:33 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:03.262 00:11:33 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:03.262 00:11:33 json_config -- json_config/common.sh@31 -- # local app=target 00:04:03.262 00:11:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:03.262 00:11:33 json_config -- json_config/common.sh@35 -- # [[ -n 3015369 ]] 00:04:03.262 00:11:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3015369 00:04:03.262 00:11:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:03.262 00:11:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:03.262 00:11:33 json_config -- json_config/common.sh@41 -- # kill -0 3015369 00:04:03.262 00:11:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:03.836 00:11:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:03.836 00:11:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:03.836 00:11:34 json_config -- json_config/common.sh@41 -- # kill -0 3015369 00:04:03.836 00:11:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:03.836 00:11:34 json_config -- json_config/common.sh@43 -- # break 00:04:03.836 00:11:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:03.836 00:11:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:03.836 SPDK target shutdown done 00:04:03.836 00:11:34 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:03.836 INFO: relaunching applications... 
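Note: the shutdown traced above is a SIGINT followed by a bounded poll; a simplified sketch of the loop json_config/common.sh runs (pid taken from the trace, 30 iterations of 0.5 s as shown).

# Sketch of the SIGINT-then-poll shutdown used by json_config_test_shutdown_app
app_pid=3015369                      # from the trace; normally captured via $!
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    # kill -0 sends no signal, it only tests that the pid still exists
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done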
00:04:03.836 00:11:34 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.836 00:11:34 json_config -- json_config/common.sh@9 -- # local app=target 00:04:03.836 00:11:34 json_config -- json_config/common.sh@10 -- # shift 00:04:03.836 00:11:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:03.836 00:11:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:03.836 00:11:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:03.836 00:11:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.836 00:11:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.836 00:11:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3016356 00:04:03.836 00:11:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:03.836 Waiting for target to run... 00:04:03.836 00:11:34 json_config -- json_config/common.sh@25 -- # waitforlisten 3016356 /var/tmp/spdk_tgt.sock 00:04:03.836 00:11:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.836 00:11:34 json_config -- common/autotest_common.sh@831 -- # '[' -z 3016356 ']' 00:04:03.836 00:11:34 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:03.836 00:11:34 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:03.836 00:11:34 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:03.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:03.836 00:11:34 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:03.836 00:11:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.836 [2024-10-09 00:11:34.279656] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:03.836 [2024-10-09 00:11:34.279749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3016356 ] 00:04:04.097 [2024-10-09 00:11:34.547960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.097 [2024-10-09 00:11:34.590226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.667 [2024-10-09 00:11:35.090249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:04.667 [2024-10-09 00:11:35.122601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:04.667 00:11:35 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:04.667 00:11:35 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:04.667 00:11:35 json_config -- json_config/common.sh@26 -- # echo '' 00:04:04.667 00:04:04.667 00:11:35 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:04.667 00:11:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:04.667 INFO: Checking if target configuration is the same... 
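Note: relaunching reuses the spdk_tgt_config.json saved earlier; the sketch below restarts the target from that file and polls its RPC socket as a simplified stand-in for the waitforlisten helper, whose internals are not shown in the trace (paths shortened).

# Sketch: restart spdk_tgt from the saved JSON config and wait for the RPC socket
SOCK=/var/tmp/spdk_tgt.sock
./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --json ./spdk_tgt_config.json &
tgt_pid=$!

# Simplified wait: retry until the target answers an RPC, bail out if it died
until ./scripts/rpc.py -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo 'target exited during startup' >&2; exit 1; }
    sleep 0.2
done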
00:04:04.667 00:11:35 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.667 00:11:35 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:04.667 00:11:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:04.667 + '[' 2 -ne 2 ']' 00:04:04.667 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:04.667 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:04.667 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.667 +++ basename /dev/fd/62 00:04:04.667 ++ mktemp /tmp/62.XXX 00:04:04.667 + tmp_file_1=/tmp/62.Pfm 00:04:04.667 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.667 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:04.667 + tmp_file_2=/tmp/spdk_tgt_config.json.go6 00:04:04.667 + ret=0 00:04:04.667 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.928 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.928 + diff -u /tmp/62.Pfm /tmp/spdk_tgt_config.json.go6 00:04:04.928 + echo 'INFO: JSON config files are the same' 00:04:04.928 INFO: JSON config files are the same 00:04:04.928 + rm /tmp/62.Pfm /tmp/spdk_tgt_config.json.go6 00:04:04.928 + exit 0 00:04:04.928 00:11:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:04.928 00:11:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:04.928 INFO: changing configuration and checking if this can be detected... 00:04:04.928 00:11:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:04.928 00:11:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:05.188 00:11:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:05.188 00:11:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.188 00:11:35 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.188 + '[' 2 -ne 2 ']' 00:04:05.188 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:05.188 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
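Note: the "same configuration" check above normalises both JSON documents before diffing; a condensed sketch of what json_diff.sh does (the exact plumbing of config_filter.py is simplified here and assumed to read stdin; temp file names follow the mktemp patterns in the trace).

# Sketch of the json_diff.sh flow: sort both configs, then diff the normalised copies
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > "$tmp_file_1"
./test/json_config/config_filter.py -method sort < ./spdk_tgt_config.json > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2"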
00:04:05.188 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:05.188 +++ basename /dev/fd/62 00:04:05.188 ++ mktemp /tmp/62.XXX 00:04:05.188 + tmp_file_1=/tmp/62.aL8 00:04:05.188 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.188 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:05.188 + tmp_file_2=/tmp/spdk_tgt_config.json.yJ5 00:04:05.188 + ret=0 00:04:05.188 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:05.456 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:05.719 + diff -u /tmp/62.aL8 /tmp/spdk_tgt_config.json.yJ5 00:04:05.719 + ret=1 00:04:05.719 + echo '=== Start of file: /tmp/62.aL8 ===' 00:04:05.719 + cat /tmp/62.aL8 00:04:05.719 + echo '=== End of file: /tmp/62.aL8 ===' 00:04:05.719 + echo '' 00:04:05.719 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yJ5 ===' 00:04:05.719 + cat /tmp/spdk_tgt_config.json.yJ5 00:04:05.719 + echo '=== End of file: /tmp/spdk_tgt_config.json.yJ5 ===' 00:04:05.719 + echo '' 00:04:05.719 + rm /tmp/62.aL8 /tmp/spdk_tgt_config.json.yJ5 00:04:05.719 + exit 1 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:05.719 INFO: configuration change detected. 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@324 -- # [[ -n 3016356 ]] 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.719 00:11:36 json_config -- json_config/json_config.sh@330 -- # killprocess 3016356 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@950 -- # '[' -z 3016356 ']' 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@954 -- # kill -0 3016356 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@955 -- # uname 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:05.719 00:11:36 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3016356 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3016356' 00:04:05.719 killing process with pid 3016356 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@969 -- # kill 3016356 00:04:05.719 00:11:36 json_config -- common/autotest_common.sh@974 -- # wait 3016356 00:04:05.980 00:11:36 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.980 00:11:36 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:05.980 00:11:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.980 00:11:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.981 00:11:36 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:05.981 00:11:36 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:05.981 INFO: Success 00:04:05.981 00:04:05.981 real 0m7.315s 00:04:05.981 user 0m8.813s 00:04:05.981 sys 0m1.955s 00:04:05.981 00:11:36 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.981 00:11:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.981 ************************************ 00:04:05.981 END TEST json_config 00:04:05.981 ************************************ 00:04:05.981 00:11:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:05.981 00:11:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.981 00:11:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.981 00:11:36 -- common/autotest_common.sh@10 -- # set +x 00:04:06.242 ************************************ 00:04:06.242 START TEST json_config_extra_key 00:04:06.242 ************************************ 00:04:06.242 00:11:36 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:06.242 00:11:36 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:06.242 00:11:36 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:06.242 00:11:36 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:06.242 00:11:36 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.242 00:11:36 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.242 00:11:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:06.242 00:11:36 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.242 00:11:36 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:06.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.242 --rc genhtml_branch_coverage=1 00:04:06.242 --rc genhtml_function_coverage=1 00:04:06.242 --rc genhtml_legend=1 00:04:06.242 --rc geninfo_all_blocks=1 00:04:06.242 --rc geninfo_unexecuted_blocks=1 00:04:06.242 00:04:06.242 ' 00:04:06.242 00:11:36 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:06.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.242 --rc genhtml_branch_coverage=1 00:04:06.242 --rc genhtml_function_coverage=1 00:04:06.243 --rc genhtml_legend=1 00:04:06.243 --rc geninfo_all_blocks=1 00:04:06.243 --rc geninfo_unexecuted_blocks=1 00:04:06.243 00:04:06.243 ' 00:04:06.243 00:11:36 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:06.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.243 --rc genhtml_branch_coverage=1 00:04:06.243 --rc genhtml_function_coverage=1 00:04:06.243 --rc genhtml_legend=1 00:04:06.243 --rc geninfo_all_blocks=1 00:04:06.243 --rc geninfo_unexecuted_blocks=1 00:04:06.243 00:04:06.243 ' 00:04:06.243 00:11:36 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:06.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.243 --rc genhtml_branch_coverage=1 00:04:06.243 --rc genhtml_function_coverage=1 00:04:06.243 --rc genhtml_legend=1 00:04:06.243 --rc geninfo_all_blocks=1 00:04:06.243 --rc geninfo_unexecuted_blocks=1 00:04:06.243 00:04:06.243 ' 00:04:06.243 00:11:36 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:06.243 00:11:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:06.243 00:11:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.243 00:11:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.243 00:11:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.243 00:11:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.243 00:11:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.243 00:11:36 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.243 00:11:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:06.243 00:11:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:06.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:06.243 00:11:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:06.243 INFO: launching applications... 
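Note: the declare lines above show how json_config_extra_key.sh keys every per-app detail off one associative-array index; a minimal sketch of that bookkeeping pattern (paths shortened, launch line mirroring the common.sh invocation seen in the trace).

# Sketch of the per-app bookkeeping used by json_config/common.sh callers
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='./test/json_config/extra_key.json')

app=target
./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
app_pid[$app]=$!
echo "Waiting for $app to run..."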
00:04:06.243 00:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3017049 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.243 Waiting for target to run... 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3017049 /var/tmp/spdk_tgt.sock 00:04:06.243 00:11:36 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3017049 ']' 00:04:06.243 00:11:36 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.243 00:11:36 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:06.243 00:11:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.243 00:11:36 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.243 00:11:36 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:06.243 00:11:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:06.504 [2024-10-09 00:11:36.891941] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:06.504 [2024-10-09 00:11:36.892011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017049 ] 00:04:06.764 [2024-10-09 00:11:37.207460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.764 [2024-10-09 00:11:37.248629] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.336 00:11:37 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:07.336 00:11:37 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:07.336 00:11:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:07.336 00:04:07.336 00:11:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:07.336 INFO: shutting down applications... 
00:04:07.336 00:11:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:07.336 00:11:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:07.336 00:11:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:07.336 00:11:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3017049 ]] 00:04:07.336 00:11:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3017049 00:04:07.336 00:11:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:07.336 00:11:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.336 00:11:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3017049 00:04:07.336 00:11:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:07.597 00:11:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:07.597 00:11:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.597 00:11:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3017049 00:04:07.597 00:11:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:07.597 00:11:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:07.597 00:11:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:07.597 00:11:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:07.597 SPDK target shutdown done 00:04:07.597 00:11:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:07.597 Success 00:04:07.597 00:04:07.597 real 0m1.564s 00:04:07.597 user 0m1.166s 00:04:07.597 sys 0m0.439s 00:04:07.597 00:11:38 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.597 00:11:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:07.597 ************************************ 00:04:07.597 END TEST json_config_extra_key 00:04:07.597 ************************************ 00:04:07.597 00:11:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:07.597 00:11:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.597 00:11:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.858 00:11:38 -- common/autotest_common.sh@10 -- # set +x 00:04:07.858 ************************************ 00:04:07.858 START TEST alias_rpc 00:04:07.858 ************************************ 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:07.858 * Looking for test storage... 
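Note: each of these test wrappers opens with the same lcov version gate (visible above before json_config_extra_key and again below); a compact sketch of the component-wise comparison behind "lt 1.15 2", assuming purely numeric components (the helper name ver_lt is illustrative; the real script uses cmp_versions).

# Sketch: does version $1 sort strictly before version $2?
ver_lt() {
    local IFS=.-:                    # split on dots, dashes and colons, as in scripts/common.sh
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0      # first differing component decides
        (( a > b )) && return 1
    done
    return 1                          # equal versions are not "less than"
}

ver_lt 1.15 2 && echo 'lcov predates 2.x, keep the branch-coverage options'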
00:04:07.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.858 00:11:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.858 --rc genhtml_branch_coverage=1 00:04:07.858 --rc genhtml_function_coverage=1 00:04:07.858 --rc genhtml_legend=1 00:04:07.858 --rc geninfo_all_blocks=1 00:04:07.858 --rc geninfo_unexecuted_blocks=1 00:04:07.858 00:04:07.858 ' 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.858 --rc genhtml_branch_coverage=1 00:04:07.858 --rc genhtml_function_coverage=1 00:04:07.858 --rc genhtml_legend=1 00:04:07.858 --rc geninfo_all_blocks=1 00:04:07.858 --rc geninfo_unexecuted_blocks=1 00:04:07.858 00:04:07.858 ' 00:04:07.858 00:11:38 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.858 --rc genhtml_branch_coverage=1 00:04:07.858 --rc genhtml_function_coverage=1 00:04:07.858 --rc genhtml_legend=1 00:04:07.858 --rc geninfo_all_blocks=1 00:04:07.858 --rc geninfo_unexecuted_blocks=1 00:04:07.858 00:04:07.858 ' 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.858 --rc genhtml_branch_coverage=1 00:04:07.858 --rc genhtml_function_coverage=1 00:04:07.858 --rc genhtml_legend=1 00:04:07.858 --rc geninfo_all_blocks=1 00:04:07.858 --rc geninfo_unexecuted_blocks=1 00:04:07.858 00:04:07.858 ' 00:04:07.858 00:11:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:07.858 00:11:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3017442 00:04:07.858 00:11:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3017442 00:04:07.858 00:11:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3017442 ']' 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:07.858 00:11:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.119 [2024-10-09 00:11:38.531454] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:08.119 [2024-10-09 00:11:38.531524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017442 ] 00:04:08.119 [2024-10-09 00:11:38.590137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.119 [2024-10-09 00:11:38.644548] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:09.060 00:11:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:09.060 00:11:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3017442 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3017442 ']' 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3017442 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3017442 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3017442' 00:04:09.060 killing process with pid 3017442 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@969 -- # kill 3017442 00:04:09.060 00:11:39 alias_rpc -- common/autotest_common.sh@974 -- # wait 3017442 00:04:09.320 00:04:09.320 real 0m1.533s 00:04:09.320 user 0m1.704s 00:04:09.320 sys 0m0.418s 00:04:09.320 00:11:39 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.320 00:11:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.320 ************************************ 00:04:09.320 END TEST alias_rpc 00:04:09.320 ************************************ 00:04:09.320 00:11:39 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:09.320 00:11:39 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:09.320 00:11:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.320 00:11:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.320 00:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:09.320 ************************************ 00:04:09.320 START TEST spdkcli_tcp 00:04:09.320 ************************************ 00:04:09.320 00:11:39 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:09.580 * Looking for test storage... 
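Note: both the json_config and alias_rpc runs above tear their target down through the shared killprocess helper; a reduced sketch of the checks it performs, derived from the trace and deliberately simplified.

# Sketch of the killprocess pattern from common/autotest_common.sh
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # spdk_tgt shows up as reactor_0
    if [ "$process_name" = sudo ]; then
        sudo kill "$pid"                              # elevated rights needed for sudo-owned pids
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true
}
# killprocess 3017442    <- invocation as seen in the alias_rpc trace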
00:04:09.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:09.580 00:11:39 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:09.580 00:11:39 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:09.580 00:11:39 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.580 00:11:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:09.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.580 --rc genhtml_branch_coverage=1 00:04:09.580 --rc genhtml_function_coverage=1 00:04:09.580 --rc genhtml_legend=1 00:04:09.580 --rc geninfo_all_blocks=1 00:04:09.580 --rc geninfo_unexecuted_blocks=1 00:04:09.580 00:04:09.580 ' 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:09.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.580 --rc genhtml_branch_coverage=1 00:04:09.580 --rc genhtml_function_coverage=1 00:04:09.580 --rc genhtml_legend=1 00:04:09.580 --rc geninfo_all_blocks=1 00:04:09.580 --rc 
geninfo_unexecuted_blocks=1 00:04:09.580 00:04:09.580 ' 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:09.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.580 --rc genhtml_branch_coverage=1 00:04:09.580 --rc genhtml_function_coverage=1 00:04:09.580 --rc genhtml_legend=1 00:04:09.580 --rc geninfo_all_blocks=1 00:04:09.580 --rc geninfo_unexecuted_blocks=1 00:04:09.580 00:04:09.580 ' 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:09.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.580 --rc genhtml_branch_coverage=1 00:04:09.580 --rc genhtml_function_coverage=1 00:04:09.580 --rc genhtml_legend=1 00:04:09.580 --rc geninfo_all_blocks=1 00:04:09.580 --rc geninfo_unexecuted_blocks=1 00:04:09.580 00:04:09.580 ' 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3017840 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3017840 00:04:09.580 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3017840 ']' 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:09.580 00:11:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.581 00:11:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:09.581 00:11:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.581 [2024-10-09 00:11:40.151704] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:09.581 [2024-10-09 00:11:40.151783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017840 ] 00:04:09.841 [2024-10-09 00:11:40.228691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.841 [2024-10-09 00:11:40.284732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.841 [2024-10-09 00:11:40.284745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.412 00:11:40 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:10.412 00:11:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:10.412 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3017924 00:04:10.412 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:10.412 00:11:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:10.673 [ 00:04:10.673 "bdev_malloc_delete", 00:04:10.673 "bdev_malloc_create", 00:04:10.673 "bdev_null_resize", 00:04:10.673 "bdev_null_delete", 00:04:10.673 "bdev_null_create", 00:04:10.673 "bdev_nvme_cuse_unregister", 00:04:10.673 "bdev_nvme_cuse_register", 00:04:10.673 "bdev_opal_new_user", 00:04:10.673 "bdev_opal_set_lock_state", 00:04:10.673 "bdev_opal_delete", 00:04:10.673 "bdev_opal_get_info", 00:04:10.673 "bdev_opal_create", 00:04:10.673 "bdev_nvme_opal_revert", 00:04:10.673 "bdev_nvme_opal_init", 00:04:10.673 "bdev_nvme_send_cmd", 00:04:10.673 "bdev_nvme_set_keys", 00:04:10.673 "bdev_nvme_get_path_iostat", 00:04:10.673 "bdev_nvme_get_mdns_discovery_info", 00:04:10.673 "bdev_nvme_stop_mdns_discovery", 00:04:10.673 "bdev_nvme_start_mdns_discovery", 00:04:10.673 "bdev_nvme_set_multipath_policy", 00:04:10.673 "bdev_nvme_set_preferred_path", 00:04:10.673 "bdev_nvme_get_io_paths", 00:04:10.673 "bdev_nvme_remove_error_injection", 00:04:10.673 "bdev_nvme_add_error_injection", 00:04:10.673 "bdev_nvme_get_discovery_info", 00:04:10.673 "bdev_nvme_stop_discovery", 00:04:10.673 "bdev_nvme_start_discovery", 00:04:10.673 "bdev_nvme_get_controller_health_info", 00:04:10.673 "bdev_nvme_disable_controller", 00:04:10.673 "bdev_nvme_enable_controller", 00:04:10.673 "bdev_nvme_reset_controller", 00:04:10.673 "bdev_nvme_get_transport_statistics", 00:04:10.673 "bdev_nvme_apply_firmware", 00:04:10.673 "bdev_nvme_detach_controller", 00:04:10.673 "bdev_nvme_get_controllers", 00:04:10.673 "bdev_nvme_attach_controller", 00:04:10.673 "bdev_nvme_set_hotplug", 00:04:10.673 "bdev_nvme_set_options", 00:04:10.673 "bdev_passthru_delete", 00:04:10.673 "bdev_passthru_create", 00:04:10.673 "bdev_lvol_set_parent_bdev", 00:04:10.673 "bdev_lvol_set_parent", 00:04:10.673 "bdev_lvol_check_shallow_copy", 00:04:10.673 "bdev_lvol_start_shallow_copy", 00:04:10.673 "bdev_lvol_grow_lvstore", 00:04:10.673 "bdev_lvol_get_lvols", 00:04:10.673 "bdev_lvol_get_lvstores", 00:04:10.673 "bdev_lvol_delete", 00:04:10.673 "bdev_lvol_set_read_only", 00:04:10.673 "bdev_lvol_resize", 00:04:10.673 "bdev_lvol_decouple_parent", 00:04:10.673 "bdev_lvol_inflate", 00:04:10.673 "bdev_lvol_rename", 00:04:10.673 "bdev_lvol_clone_bdev", 00:04:10.673 "bdev_lvol_clone", 00:04:10.673 "bdev_lvol_snapshot", 00:04:10.673 "bdev_lvol_create", 00:04:10.673 "bdev_lvol_delete_lvstore", 00:04:10.673 "bdev_lvol_rename_lvstore", 
00:04:10.673 "bdev_lvol_create_lvstore", 00:04:10.673 "bdev_raid_set_options", 00:04:10.673 "bdev_raid_remove_base_bdev", 00:04:10.673 "bdev_raid_add_base_bdev", 00:04:10.673 "bdev_raid_delete", 00:04:10.673 "bdev_raid_create", 00:04:10.673 "bdev_raid_get_bdevs", 00:04:10.673 "bdev_error_inject_error", 00:04:10.673 "bdev_error_delete", 00:04:10.673 "bdev_error_create", 00:04:10.673 "bdev_split_delete", 00:04:10.673 "bdev_split_create", 00:04:10.673 "bdev_delay_delete", 00:04:10.673 "bdev_delay_create", 00:04:10.673 "bdev_delay_update_latency", 00:04:10.673 "bdev_zone_block_delete", 00:04:10.673 "bdev_zone_block_create", 00:04:10.673 "blobfs_create", 00:04:10.673 "blobfs_detect", 00:04:10.673 "blobfs_set_cache_size", 00:04:10.673 "bdev_aio_delete", 00:04:10.673 "bdev_aio_rescan", 00:04:10.673 "bdev_aio_create", 00:04:10.673 "bdev_ftl_set_property", 00:04:10.673 "bdev_ftl_get_properties", 00:04:10.673 "bdev_ftl_get_stats", 00:04:10.673 "bdev_ftl_unmap", 00:04:10.673 "bdev_ftl_unload", 00:04:10.673 "bdev_ftl_delete", 00:04:10.673 "bdev_ftl_load", 00:04:10.673 "bdev_ftl_create", 00:04:10.673 "bdev_virtio_attach_controller", 00:04:10.673 "bdev_virtio_scsi_get_devices", 00:04:10.673 "bdev_virtio_detach_controller", 00:04:10.673 "bdev_virtio_blk_set_hotplug", 00:04:10.673 "bdev_iscsi_delete", 00:04:10.673 "bdev_iscsi_create", 00:04:10.673 "bdev_iscsi_set_options", 00:04:10.673 "accel_error_inject_error", 00:04:10.673 "ioat_scan_accel_module", 00:04:10.673 "dsa_scan_accel_module", 00:04:10.673 "iaa_scan_accel_module", 00:04:10.673 "vfu_virtio_create_fs_endpoint", 00:04:10.673 "vfu_virtio_create_scsi_endpoint", 00:04:10.673 "vfu_virtio_scsi_remove_target", 00:04:10.673 "vfu_virtio_scsi_add_target", 00:04:10.673 "vfu_virtio_create_blk_endpoint", 00:04:10.673 "vfu_virtio_delete_endpoint", 00:04:10.673 "keyring_file_remove_key", 00:04:10.673 "keyring_file_add_key", 00:04:10.673 "keyring_linux_set_options", 00:04:10.673 "fsdev_aio_delete", 00:04:10.673 "fsdev_aio_create", 00:04:10.673 "iscsi_get_histogram", 00:04:10.673 "iscsi_enable_histogram", 00:04:10.673 "iscsi_set_options", 00:04:10.673 "iscsi_get_auth_groups", 00:04:10.673 "iscsi_auth_group_remove_secret", 00:04:10.673 "iscsi_auth_group_add_secret", 00:04:10.673 "iscsi_delete_auth_group", 00:04:10.673 "iscsi_create_auth_group", 00:04:10.673 "iscsi_set_discovery_auth", 00:04:10.673 "iscsi_get_options", 00:04:10.673 "iscsi_target_node_request_logout", 00:04:10.673 "iscsi_target_node_set_redirect", 00:04:10.673 "iscsi_target_node_set_auth", 00:04:10.673 "iscsi_target_node_add_lun", 00:04:10.673 "iscsi_get_stats", 00:04:10.673 "iscsi_get_connections", 00:04:10.673 "iscsi_portal_group_set_auth", 00:04:10.673 "iscsi_start_portal_group", 00:04:10.673 "iscsi_delete_portal_group", 00:04:10.673 "iscsi_create_portal_group", 00:04:10.673 "iscsi_get_portal_groups", 00:04:10.673 "iscsi_delete_target_node", 00:04:10.673 "iscsi_target_node_remove_pg_ig_maps", 00:04:10.673 "iscsi_target_node_add_pg_ig_maps", 00:04:10.673 "iscsi_create_target_node", 00:04:10.673 "iscsi_get_target_nodes", 00:04:10.673 "iscsi_delete_initiator_group", 00:04:10.673 "iscsi_initiator_group_remove_initiators", 00:04:10.673 "iscsi_initiator_group_add_initiators", 00:04:10.673 "iscsi_create_initiator_group", 00:04:10.673 "iscsi_get_initiator_groups", 00:04:10.673 "nvmf_set_crdt", 00:04:10.673 "nvmf_set_config", 00:04:10.673 "nvmf_set_max_subsystems", 00:04:10.673 "nvmf_stop_mdns_prr", 00:04:10.673 "nvmf_publish_mdns_prr", 00:04:10.673 "nvmf_subsystem_get_listeners", 00:04:10.673 
"nvmf_subsystem_get_qpairs", 00:04:10.673 "nvmf_subsystem_get_controllers", 00:04:10.673 "nvmf_get_stats", 00:04:10.673 "nvmf_get_transports", 00:04:10.674 "nvmf_create_transport", 00:04:10.674 "nvmf_get_targets", 00:04:10.674 "nvmf_delete_target", 00:04:10.674 "nvmf_create_target", 00:04:10.674 "nvmf_subsystem_allow_any_host", 00:04:10.674 "nvmf_subsystem_set_keys", 00:04:10.674 "nvmf_subsystem_remove_host", 00:04:10.674 "nvmf_subsystem_add_host", 00:04:10.674 "nvmf_ns_remove_host", 00:04:10.674 "nvmf_ns_add_host", 00:04:10.674 "nvmf_subsystem_remove_ns", 00:04:10.674 "nvmf_subsystem_set_ns_ana_group", 00:04:10.674 "nvmf_subsystem_add_ns", 00:04:10.674 "nvmf_subsystem_listener_set_ana_state", 00:04:10.674 "nvmf_discovery_get_referrals", 00:04:10.674 "nvmf_discovery_remove_referral", 00:04:10.674 "nvmf_discovery_add_referral", 00:04:10.674 "nvmf_subsystem_remove_listener", 00:04:10.674 "nvmf_subsystem_add_listener", 00:04:10.674 "nvmf_delete_subsystem", 00:04:10.674 "nvmf_create_subsystem", 00:04:10.674 "nvmf_get_subsystems", 00:04:10.674 "env_dpdk_get_mem_stats", 00:04:10.674 "nbd_get_disks", 00:04:10.674 "nbd_stop_disk", 00:04:10.674 "nbd_start_disk", 00:04:10.674 "ublk_recover_disk", 00:04:10.674 "ublk_get_disks", 00:04:10.674 "ublk_stop_disk", 00:04:10.674 "ublk_start_disk", 00:04:10.674 "ublk_destroy_target", 00:04:10.674 "ublk_create_target", 00:04:10.674 "virtio_blk_create_transport", 00:04:10.674 "virtio_blk_get_transports", 00:04:10.674 "vhost_controller_set_coalescing", 00:04:10.674 "vhost_get_controllers", 00:04:10.674 "vhost_delete_controller", 00:04:10.674 "vhost_create_blk_controller", 00:04:10.674 "vhost_scsi_controller_remove_target", 00:04:10.674 "vhost_scsi_controller_add_target", 00:04:10.674 "vhost_start_scsi_controller", 00:04:10.674 "vhost_create_scsi_controller", 00:04:10.674 "thread_set_cpumask", 00:04:10.674 "scheduler_set_options", 00:04:10.674 "framework_get_governor", 00:04:10.674 "framework_get_scheduler", 00:04:10.674 "framework_set_scheduler", 00:04:10.674 "framework_get_reactors", 00:04:10.674 "thread_get_io_channels", 00:04:10.674 "thread_get_pollers", 00:04:10.674 "thread_get_stats", 00:04:10.674 "framework_monitor_context_switch", 00:04:10.674 "spdk_kill_instance", 00:04:10.674 "log_enable_timestamps", 00:04:10.674 "log_get_flags", 00:04:10.674 "log_clear_flag", 00:04:10.674 "log_set_flag", 00:04:10.674 "log_get_level", 00:04:10.674 "log_set_level", 00:04:10.674 "log_get_print_level", 00:04:10.674 "log_set_print_level", 00:04:10.674 "framework_enable_cpumask_locks", 00:04:10.674 "framework_disable_cpumask_locks", 00:04:10.674 "framework_wait_init", 00:04:10.674 "framework_start_init", 00:04:10.674 "scsi_get_devices", 00:04:10.674 "bdev_get_histogram", 00:04:10.674 "bdev_enable_histogram", 00:04:10.674 "bdev_set_qos_limit", 00:04:10.674 "bdev_set_qd_sampling_period", 00:04:10.674 "bdev_get_bdevs", 00:04:10.674 "bdev_reset_iostat", 00:04:10.674 "bdev_get_iostat", 00:04:10.674 "bdev_examine", 00:04:10.674 "bdev_wait_for_examine", 00:04:10.674 "bdev_set_options", 00:04:10.674 "accel_get_stats", 00:04:10.674 "accel_set_options", 00:04:10.674 "accel_set_driver", 00:04:10.674 "accel_crypto_key_destroy", 00:04:10.674 "accel_crypto_keys_get", 00:04:10.674 "accel_crypto_key_create", 00:04:10.674 "accel_assign_opc", 00:04:10.674 "accel_get_module_info", 00:04:10.674 "accel_get_opc_assignments", 00:04:10.674 "vmd_rescan", 00:04:10.674 "vmd_remove_device", 00:04:10.674 "vmd_enable", 00:04:10.674 "sock_get_default_impl", 00:04:10.674 "sock_set_default_impl", 
00:04:10.674 "sock_impl_set_options", 00:04:10.674 "sock_impl_get_options", 00:04:10.674 "iobuf_get_stats", 00:04:10.674 "iobuf_set_options", 00:04:10.674 "keyring_get_keys", 00:04:10.674 "vfu_tgt_set_base_path", 00:04:10.674 "framework_get_pci_devices", 00:04:10.674 "framework_get_config", 00:04:10.674 "framework_get_subsystems", 00:04:10.674 "fsdev_set_opts", 00:04:10.674 "fsdev_get_opts", 00:04:10.674 "trace_get_info", 00:04:10.674 "trace_get_tpoint_group_mask", 00:04:10.674 "trace_disable_tpoint_group", 00:04:10.674 "trace_enable_tpoint_group", 00:04:10.674 "trace_clear_tpoint_mask", 00:04:10.674 "trace_set_tpoint_mask", 00:04:10.674 "notify_get_notifications", 00:04:10.674 "notify_get_types", 00:04:10.674 "spdk_get_version", 00:04:10.674 "rpc_get_methods" 00:04:10.674 ] 00:04:10.674 00:11:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:10.674 00:11:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:10.674 00:11:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3017840 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3017840 ']' 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3017840 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3017840 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3017840' 00:04:10.674 killing process with pid 3017840 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3017840 00:04:10.674 00:11:41 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3017840 00:04:10.943 00:04:10.943 real 0m1.559s 00:04:10.943 user 0m2.840s 00:04:10.943 sys 0m0.450s 00:04:10.943 00:11:41 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.943 00:11:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:10.943 ************************************ 00:04:10.943 END TEST spdkcli_tcp 00:04:10.943 ************************************ 00:04:10.943 00:11:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.943 00:11:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.943 00:11:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.943 00:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:10.943 ************************************ 00:04:10.943 START TEST dpdk_mem_utility 00:04:10.943 ************************************ 00:04:10.943 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:11.205 * Looking for test storage... 
00:04:11.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.205 00:11:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:11.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.205 --rc genhtml_branch_coverage=1 00:04:11.205 --rc genhtml_function_coverage=1 00:04:11.205 --rc genhtml_legend=1 00:04:11.205 --rc geninfo_all_blocks=1 00:04:11.205 --rc geninfo_unexecuted_blocks=1 00:04:11.205 00:04:11.205 ' 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:11.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.205 --rc 
genhtml_branch_coverage=1 00:04:11.205 --rc genhtml_function_coverage=1 00:04:11.205 --rc genhtml_legend=1 00:04:11.205 --rc geninfo_all_blocks=1 00:04:11.205 --rc geninfo_unexecuted_blocks=1 00:04:11.205 00:04:11.205 ' 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:11.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.205 --rc genhtml_branch_coverage=1 00:04:11.205 --rc genhtml_function_coverage=1 00:04:11.205 --rc genhtml_legend=1 00:04:11.205 --rc geninfo_all_blocks=1 00:04:11.205 --rc geninfo_unexecuted_blocks=1 00:04:11.205 00:04:11.205 ' 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:11.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.205 --rc genhtml_branch_coverage=1 00:04:11.205 --rc genhtml_function_coverage=1 00:04:11.205 --rc genhtml_legend=1 00:04:11.205 --rc geninfo_all_blocks=1 00:04:11.205 --rc geninfo_unexecuted_blocks=1 00:04:11.205 00:04:11.205 ' 00:04:11.205 00:11:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:11.205 00:11:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3018255 00:04:11.205 00:11:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3018255 00:04:11.205 00:11:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3018255 ']' 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:11.205 00:11:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:11.205 [2024-10-09 00:11:41.771010] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:11.205 [2024-10-09 00:11:41.771060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018255 ] 00:04:11.467 [2024-10-09 00:11:41.849608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.467 [2024-10-09 00:11:41.906960] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.037 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:12.037 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:12.037 00:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:12.037 00:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:12.037 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.037 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:12.037 { 00:04:12.037 "filename": "/tmp/spdk_mem_dump.txt" 00:04:12.037 } 00:04:12.037 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.037 00:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:12.037 DPDK memory size 860.000000 MiB in 1 heap(s) 00:04:12.037 1 heaps totaling size 860.000000 MiB 00:04:12.037 size: 860.000000 MiB heap id: 0 00:04:12.037 end heaps---------- 00:04:12.037 9 mempools totaling size 642.649841 MiB 00:04:12.037 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:12.037 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:12.037 size: 92.545471 MiB name: bdev_io_3018255 00:04:12.037 size: 51.011292 MiB name: evtpool_3018255 00:04:12.037 size: 50.003479 MiB name: msgpool_3018255 00:04:12.037 size: 36.509338 MiB name: fsdev_io_3018255 00:04:12.037 size: 21.763794 MiB name: PDU_Pool 00:04:12.037 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:12.037 size: 0.026123 MiB name: Session_Pool 00:04:12.037 end mempools------- 00:04:12.037 6 memzones totaling size 4.142822 MiB 00:04:12.037 size: 1.000366 MiB name: RG_ring_0_3018255 00:04:12.037 size: 1.000366 MiB name: RG_ring_1_3018255 00:04:12.037 size: 1.000366 MiB name: RG_ring_4_3018255 00:04:12.037 size: 1.000366 MiB name: RG_ring_5_3018255 00:04:12.037 size: 0.125366 MiB name: RG_ring_2_3018255 00:04:12.037 size: 0.015991 MiB name: RG_ring_3_3018255 00:04:12.037 end memzones------- 00:04:12.037 00:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:12.037 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:04:12.037 list of free elements. 
size: 13.984680 MiB 00:04:12.037 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:12.037 element at address: 0x200000800000 with size: 1.996948 MiB 00:04:12.037 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:04:12.037 element at address: 0x20001be00000 with size: 0.999878 MiB 00:04:12.037 element at address: 0x200034a00000 with size: 0.994446 MiB 00:04:12.037 element at address: 0x200009600000 with size: 0.959839 MiB 00:04:12.037 element at address: 0x200015e00000 with size: 0.954285 MiB 00:04:12.037 element at address: 0x20001c000000 with size: 0.936584 MiB 00:04:12.037 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:12.037 element at address: 0x20001d800000 with size: 0.582886 MiB 00:04:12.037 element at address: 0x200003e00000 with size: 0.495422 MiB 00:04:12.037 element at address: 0x20000d800000 with size: 0.490723 MiB 00:04:12.037 element at address: 0x20001c200000 with size: 0.485657 MiB 00:04:12.037 element at address: 0x200007000000 with size: 0.481934 MiB 00:04:12.037 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:04:12.037 element at address: 0x200003a00000 with size: 0.355042 MiB 00:04:12.037 list of standard malloc elements. size: 199.218628 MiB 00:04:12.037 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:04:12.037 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:04:12.037 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:04:12.037 element at address: 0x20001befff80 with size: 1.000122 MiB 00:04:12.037 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:04:12.037 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:12.037 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:04:12.037 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:12.037 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:04:12.037 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:12.037 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:12.037 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:12.037 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:12.037 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:12.037 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:12.037 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003aff940 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003eff000 with size: 0.000183 MiB 00:04:12.037 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20000707b600 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:04:12.038 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:04:12.038 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:04:12.038 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:04:12.038 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20001d895380 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20001d895440 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:04:12.038 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:04:12.038 list of memzone associated elements. size: 646.796692 MiB 00:04:12.038 element at address: 0x20001d895500 with size: 211.416748 MiB 00:04:12.038 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:12.038 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:04:12.038 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:12.038 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:04:12.038 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3018255_0 00:04:12.038 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:12.038 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3018255_0 00:04:12.038 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:12.038 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3018255_0 00:04:12.038 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:04:12.038 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3018255_0 00:04:12.038 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:04:12.038 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:12.038 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:04:12.038 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:12.038 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:12.038 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3018255 00:04:12.038 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:12.038 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3018255 00:04:12.038 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:12.038 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3018255 00:04:12.038 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:04:12.038 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:12.038 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:04:12.038 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:12.038 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:04:12.038 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:12.038 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:04:12.038 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:12.038 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:12.038 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3018255 00:04:12.038 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:12.038 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_3018255 00:04:12.038 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:04:12.038 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3018255 00:04:12.038 element at address: 0x200034afe940 with size: 1.000488 MiB 00:04:12.038 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3018255 00:04:12.038 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:04:12.038 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3018255 00:04:12.038 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:04:12.038 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3018255 00:04:12.038 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:04:12.038 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:12.038 element at address: 0x20000707b780 with size: 0.500488 MiB 00:04:12.038 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:12.038 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:04:12.038 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:12.038 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:04:12.038 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3018255 00:04:12.038 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:04:12.038 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:12.038 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:04:12.038 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:12.038 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:04:12.038 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3018255 00:04:12.038 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:04:12.038 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:12.038 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:12.038 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3018255 00:04:12.038 element at address: 0x200003affa00 with size: 0.000305 MiB 00:04:12.038 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3018255 00:04:12.038 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:04:12.038 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3018255 00:04:12.038 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:04:12.038 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:12.038 00:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:12.038 00:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3018255 00:04:12.038 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3018255 ']' 00:04:12.038 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3018255 00:04:12.038 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:12.038 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:12.038 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3018255 00:04:12.299 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:12.299 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:12.299 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3018255' 
00:04:12.299 killing process with pid 3018255 00:04:12.299 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3018255 00:04:12.299 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3018255 00:04:12.299 00:04:12.299 real 0m1.411s 00:04:12.299 user 0m1.479s 00:04:12.299 sys 0m0.420s 00:04:12.299 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.299 00:11:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:12.299 ************************************ 00:04:12.299 END TEST dpdk_mem_utility 00:04:12.299 ************************************ 00:04:12.560 00:11:42 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:12.560 00:11:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.560 00:11:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.560 00:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:12.560 ************************************ 00:04:12.560 START TEST event 00:04:12.560 ************************************ 00:04:12.560 00:11:43 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:12.560 * Looking for test storage... 00:04:12.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:12.560 00:11:43 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:12.560 00:11:43 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:12.560 00:11:43 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:12.560 00:11:43 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:12.560 00:11:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.560 00:11:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.560 00:11:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.560 00:11:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.560 00:11:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.560 00:11:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.560 00:11:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.560 00:11:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.560 00:11:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.560 00:11:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.560 00:11:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.560 00:11:43 event -- scripts/common.sh@344 -- # case "$op" in 00:04:12.560 00:11:43 event -- scripts/common.sh@345 -- # : 1 00:04:12.560 00:11:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.560 00:11:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.560 00:11:43 event -- scripts/common.sh@365 -- # decimal 1 00:04:12.560 00:11:43 event -- scripts/common.sh@353 -- # local d=1 00:04:12.560 00:11:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.560 00:11:43 event -- scripts/common.sh@355 -- # echo 1 00:04:12.821 00:11:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.821 00:11:43 event -- scripts/common.sh@366 -- # decimal 2 00:04:12.821 00:11:43 event -- scripts/common.sh@353 -- # local d=2 00:04:12.821 00:11:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.821 00:11:43 event -- scripts/common.sh@355 -- # echo 2 00:04:12.821 00:11:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.821 00:11:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.821 00:11:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.821 00:11:43 event -- scripts/common.sh@368 -- # return 0 00:04:12.821 00:11:43 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.821 00:11:43 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.821 --rc genhtml_branch_coverage=1 00:04:12.821 --rc genhtml_function_coverage=1 00:04:12.821 --rc genhtml_legend=1 00:04:12.821 --rc geninfo_all_blocks=1 00:04:12.821 --rc geninfo_unexecuted_blocks=1 00:04:12.821 00:04:12.821 ' 00:04:12.821 00:11:43 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.821 --rc genhtml_branch_coverage=1 00:04:12.821 --rc genhtml_function_coverage=1 00:04:12.821 --rc genhtml_legend=1 00:04:12.821 --rc geninfo_all_blocks=1 00:04:12.821 --rc geninfo_unexecuted_blocks=1 00:04:12.821 00:04:12.821 ' 00:04:12.821 00:11:43 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.821 --rc genhtml_branch_coverage=1 00:04:12.821 --rc genhtml_function_coverage=1 00:04:12.821 --rc genhtml_legend=1 00:04:12.821 --rc geninfo_all_blocks=1 00:04:12.821 --rc geninfo_unexecuted_blocks=1 00:04:12.821 00:04:12.821 ' 00:04:12.821 00:11:43 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.821 --rc genhtml_branch_coverage=1 00:04:12.821 --rc genhtml_function_coverage=1 00:04:12.821 --rc genhtml_legend=1 00:04:12.821 --rc geninfo_all_blocks=1 00:04:12.821 --rc geninfo_unexecuted_blocks=1 00:04:12.821 00:04:12.821 ' 00:04:12.821 00:11:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:12.821 00:11:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:12.821 00:11:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:12.821 00:11:43 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:12.821 00:11:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.821 00:11:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.821 ************************************ 00:04:12.821 START TEST event_perf 00:04:12.821 ************************************ 00:04:12.821 00:11:43 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:12.821 Running I/O for 1 seconds...[2024-10-09 00:11:43.263812] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:12.821 [2024-10-09 00:11:43.263900] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018652 ] 00:04:12.821 [2024-10-09 00:11:43.344954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:12.821 [2024-10-09 00:11:43.403380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.821 [2024-10-09 00:11:43.403536] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:12.821 [2024-10-09 00:11:43.403961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.821 Running I/O for 1 seconds...[2024-10-09 00:11:43.403961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:14.206 00:04:14.206 lcore 0: 183741 00:04:14.206 lcore 1: 183744 00:04:14.206 lcore 2: 183742 00:04:14.206 lcore 3: 183740 00:04:14.206 done. 00:04:14.206 00:04:14.206 real 0m1.205s 00:04:14.206 user 0m4.113s 00:04:14.206 sys 0m0.089s 00:04:14.206 00:11:44 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.206 00:11:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:14.206 ************************************ 00:04:14.206 END TEST event_perf 00:04:14.206 ************************************ 00:04:14.206 00:11:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:14.206 00:11:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:14.206 00:11:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.206 00:11:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.206 ************************************ 00:04:14.206 START TEST event_reactor 00:04:14.206 ************************************ 00:04:14.206 00:11:44 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:14.206 [2024-10-09 00:11:44.548419] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:14.206 [2024-10-09 00:11:44.548499] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018870 ] 00:04:14.206 [2024-10-09 00:11:44.631527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.206 [2024-10-09 00:11:44.693920] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.147 test_start 00:04:15.147 oneshot 00:04:15.147 tick 100 00:04:15.147 tick 100 00:04:15.147 tick 250 00:04:15.147 tick 100 00:04:15.147 tick 100 00:04:15.147 tick 250 00:04:15.147 tick 100 00:04:15.147 tick 500 00:04:15.147 tick 100 00:04:15.147 tick 100 00:04:15.147 tick 250 00:04:15.147 tick 100 00:04:15.147 tick 100 00:04:15.147 test_end 00:04:15.147 00:04:15.147 real 0m1.210s 00:04:15.147 user 0m1.125s 00:04:15.147 sys 0m0.082s 00:04:15.147 00:11:45 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.147 00:11:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:15.147 ************************************ 00:04:15.147 END TEST event_reactor 00:04:15.147 ************************************ 00:04:15.147 00:11:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:15.147 00:11:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:15.147 00:11:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.147 00:11:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.416 ************************************ 00:04:15.416 START TEST event_reactor_perf 00:04:15.416 ************************************ 00:04:15.416 00:11:45 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:15.416 [2024-10-09 00:11:45.839023] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:15.416 [2024-10-09 00:11:45.839101] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019050 ] 00:04:15.416 [2024-10-09 00:11:45.921785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.416 [2024-10-09 00:11:45.984522] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.811 test_start 00:04:16.811 test_end 00:04:16.811 Performance: 537359 events per second 00:04:16.811 00:04:16.811 real 0m1.210s 00:04:16.811 user 0m1.125s 00:04:16.811 sys 0m0.080s 00:04:16.811 00:11:47 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.811 00:11:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:16.811 ************************************ 00:04:16.811 END TEST event_reactor_perf 00:04:16.811 ************************************ 00:04:16.811 00:11:47 event -- event/event.sh@49 -- # uname -s 00:04:16.811 00:11:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:16.811 00:11:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:16.811 00:11:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.811 00:11:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.811 00:11:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.811 ************************************ 00:04:16.811 START TEST event_scheduler 00:04:16.811 ************************************ 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:16.811 * Looking for test storage... 
00:04:16.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.811 00:11:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:16.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.811 --rc genhtml_branch_coverage=1 00:04:16.811 --rc genhtml_function_coverage=1 00:04:16.811 --rc genhtml_legend=1 00:04:16.811 --rc geninfo_all_blocks=1 00:04:16.811 --rc geninfo_unexecuted_blocks=1 00:04:16.811 00:04:16.811 ' 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:16.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.811 --rc genhtml_branch_coverage=1 00:04:16.811 --rc genhtml_function_coverage=1 00:04:16.811 --rc genhtml_legend=1 00:04:16.811 --rc geninfo_all_blocks=1 00:04:16.811 --rc geninfo_unexecuted_blocks=1 00:04:16.811 00:04:16.811 ' 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:16.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.811 --rc genhtml_branch_coverage=1 00:04:16.811 --rc genhtml_function_coverage=1 00:04:16.811 --rc genhtml_legend=1 00:04:16.811 --rc geninfo_all_blocks=1 00:04:16.811 --rc geninfo_unexecuted_blocks=1 00:04:16.811 00:04:16.811 ' 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:16.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.811 --rc genhtml_branch_coverage=1 00:04:16.811 --rc genhtml_function_coverage=1 00:04:16.811 --rc genhtml_legend=1 00:04:16.811 --rc geninfo_all_blocks=1 00:04:16.811 --rc geninfo_unexecuted_blocks=1 00:04:16.811 00:04:16.811 ' 00:04:16.811 00:11:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:16.811 00:11:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:16.811 00:11:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3019437 00:04:16.811 00:11:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.811 00:11:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3019437 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3019437 ']' 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:16.811 00:11:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:16.811 [2024-10-09 00:11:47.351765] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:16.811 [2024-10-09 00:11:47.351877] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019437 ] 00:04:16.811 [2024-10-09 00:11:47.421807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:17.073 [2024-10-09 00:11:47.508362] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.073 [2024-10-09 00:11:47.508523] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.073 [2024-10-09 00:11:47.508683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:17.073 [2024-10-09 00:11:47.508684] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:17.644 00:11:48 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:17.644 00:11:48 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:17.644 00:11:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:17.644 00:11:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.644 00:11:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.644 [2024-10-09 00:11:48.179133] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:17.644 [2024-10-09 00:11:48.179151] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:17.645 [2024-10-09 00:11:48.179161] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:17.645 [2024-10-09 00:11:48.179168] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:17.645 [2024-10-09 00:11:48.179173] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:17.645 00:11:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.645 00:11:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:17.645 00:11:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.645 00:11:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.645 [2024-10-09 00:11:48.245198] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:17.645 00:11:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.645 00:11:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:17.645 00:11:48 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.645 00:11:48 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.645 00:11:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.905 ************************************ 00:04:17.905 START TEST scheduler_create_thread 00:04:17.905 ************************************ 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.905 2 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.905 3 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.905 4 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.905 5 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.905 6 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.905 7 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.905 8 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.905 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.166 9 00:04:18.166 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:18.166 00:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:18.166 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:18.166 00:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.551 10 00:04:19.551 00:11:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.551 00:11:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:19.552 00:11:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.552 00:11:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.492 00:11:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.492 00:11:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:20.492 00:11:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:20.492 00:11:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.492 00:11:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.064 00:11:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.064 00:11:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:21.064 00:11:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.064 00:11:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.634 00:11:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.634 00:11:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:21.634 00:11:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:21.634 00:11:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.634 00:11:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.205 00:11:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.205 00:04:22.205 real 0m4.466s 00:04:22.205 user 0m0.023s 00:04:22.205 sys 0m0.009s 00:04:22.205 00:11:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.205 00:11:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.205 ************************************ 00:04:22.205 END TEST scheduler_create_thread 00:04:22.205 ************************************ 00:04:22.205 00:11:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:22.205 00:11:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3019437 00:04:22.205 00:11:52 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3019437 ']' 00:04:22.205 00:11:52 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3019437 00:04:22.205 00:11:52 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:22.205 00:11:52 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:22.205 00:11:52 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3019437 00:04:22.473 00:11:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:22.473 00:11:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:22.473 00:11:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3019437' 00:04:22.473 killing process with pid 3019437 00:04:22.473 00:11:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3019437 00:04:22.473 00:11:52 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3019437 00:04:22.473 [2024-10-09 00:11:53.028071] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:22.734 00:04:22.734 real 0m6.078s 00:04:22.734 user 0m14.398s 00:04:22.734 sys 0m0.412s 00:04:22.734 00:11:53 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.734 00:11:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.734 ************************************ 00:04:22.734 END TEST event_scheduler 00:04:22.734 ************************************ 00:04:22.734 00:11:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:22.734 00:11:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:22.734 00:11:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.735 00:11:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.735 00:11:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.735 ************************************ 00:04:22.735 START TEST app_repeat 00:04:22.735 ************************************ 00:04:22.735 00:11:53 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3020819 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3020819' 00:04:22.735 Process app_repeat pid: 3020819 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:22.735 spdk_app_start Round 0 00:04:22.735 00:11:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3020819 /var/tmp/spdk-nbd.sock 00:04:22.735 00:11:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3020819 ']' 00:04:22.735 00:11:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.735 00:11:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.735 00:11:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:22.735 00:11:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.735 00:11:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:22.735 [2024-10-09 00:11:53.314129] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:22.735 [2024-10-09 00:11:53.314196] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3020819 ] 00:04:22.996 [2024-10-09 00:11:53.388729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.996 [2024-10-09 00:11:53.444263] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.996 [2024-10-09 00:11:53.444264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.565 00:11:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.565 00:11:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:23.565 00:11:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.826 Malloc0 00:04:23.826 00:11:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.087 Malloc1 00:04:24.087 00:11:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.087 00:11:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.087 /dev/nbd0 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.347 1+0 records in 00:04:24.347 1+0 records out 00:04:24.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282955 s, 14.5 MB/s 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.347 /dev/nbd1 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.347 1+0 records in 00:04:24.347 1+0 records out 00:04:24.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027852 s, 14.7 MB/s 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:24.347 00:11:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.347 00:11:54 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.347 00:11:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:24.607 { 00:04:24.607 "nbd_device": "/dev/nbd0", 00:04:24.607 "bdev_name": "Malloc0" 00:04:24.607 }, 00:04:24.607 { 00:04:24.607 "nbd_device": "/dev/nbd1", 00:04:24.607 "bdev_name": "Malloc1" 00:04:24.607 } 00:04:24.607 ]' 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:24.607 { 00:04:24.607 "nbd_device": "/dev/nbd0", 00:04:24.607 "bdev_name": "Malloc0" 00:04:24.607 }, 00:04:24.607 { 00:04:24.607 "nbd_device": "/dev/nbd1", 00:04:24.607 "bdev_name": "Malloc1" 00:04:24.607 } 00:04:24.607 ]' 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:24.607 /dev/nbd1' 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:24.607 /dev/nbd1' 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.607 256+0 records in 00:04:24.607 256+0 records out 00:04:24.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126262 s, 83.0 MB/s 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.607 256+0 records in 00:04:24.607 256+0 records out 00:04:24.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116872 s, 89.7 MB/s 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.607 00:11:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.867 256+0 records in 00:04:24.867 256+0 records out 00:04:24.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127611 s, 82.2 MB/s 00:04:24.867 00:11:55 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.867 00:11:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.127 00:11:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:25.387 00:11:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:25.387 00:11:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:25.647 00:11:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:25.647 [2024-10-09 00:11:56.169126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.647 [2024-10-09 00:11:56.222627] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.647 [2024-10-09 00:11:56.222629] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.647 [2024-10-09 00:11:56.251587] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.647 [2024-10-09 00:11:56.251617] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.948 00:11:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:28.948 00:11:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:28.948 spdk_app_start Round 1 00:04:28.948 00:11:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3020819 /var/tmp/spdk-nbd.sock 00:04:28.948 00:11:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3020819 ']' 00:04:28.948 00:11:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.948 00:11:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:28.948 00:11:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
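The write/verify pass that just completed for Round 0 follows a simple pattern visible in the nbd_common.sh trace: generate a 1 MiB random file, dd it onto every exported /dev/nbdX with O_DIRECT, then cmp each device back against the file. The condensed sketch below mirrors those commands; the scratch path is shortened relative to the workspace path in the trace, and it is an illustration of the flow rather than the verbatim helper.

# Sketch of the nbd write/verify flow seen in the trace (paths assumed).
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest

# write phase: seed a random pattern and push it to each nbd device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: every device must read back byte-identical to the pattern
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"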
00:04:28.948 00:11:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:28.948 00:11:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.948 00:11:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.948 00:11:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:28.948 00:11:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.948 Malloc0 00:04:28.948 00:11:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.208 Malloc1 00:04:29.208 00:11:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.208 /dev/nbd0 00:04:29.208 00:11:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.468 00:11:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.468 00:11:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:29.468 00:11:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:29.468 00:11:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:29.468 00:11:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:29.468 00:11:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:29.468 00:11:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:29.469 00:11:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:29.469 00:11:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:29.469 00:11:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:29.469 1+0 records in 00:04:29.469 1+0 records out 00:04:29.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289131 s, 14.2 MB/s 00:04:29.469 00:11:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.469 00:11:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:29.469 00:11:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.469 00:11:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:29.469 00:11:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:29.469 00:11:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.469 00:11:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.469 00:11:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:29.469 /dev/nbd1 00:04:29.469 00:12:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.469 00:12:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.469 1+0 records in 00:04:29.469 1+0 records out 00:04:29.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216827 s, 18.9 MB/s 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:29.469 00:12:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:29.469 00:12:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.469 00:12:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.469 00:12:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.469 00:12:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.469 00:12:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:29.728 { 00:04:29.728 "nbd_device": "/dev/nbd0", 00:04:29.728 "bdev_name": "Malloc0" 00:04:29.728 }, 00:04:29.728 { 00:04:29.728 "nbd_device": "/dev/nbd1", 00:04:29.728 "bdev_name": "Malloc1" 00:04:29.728 } 00:04:29.728 ]' 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:29.728 { 00:04:29.728 "nbd_device": "/dev/nbd0", 00:04:29.728 "bdev_name": "Malloc0" 00:04:29.728 }, 00:04:29.728 { 00:04:29.728 "nbd_device": "/dev/nbd1", 00:04:29.728 "bdev_name": "Malloc1" 00:04:29.728 } 00:04:29.728 ]' 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:29.728 /dev/nbd1' 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:29.728 /dev/nbd1' 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:29.728 256+0 records in 00:04:29.728 256+0 records out 00:04:29.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125772 s, 83.4 MB/s 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:29.728 256+0 records in 00:04:29.728 256+0 records out 00:04:29.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013305 s, 78.8 MB/s 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:29.728 256+0 records in 00:04:29.728 256+0 records out 00:04:29.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127923 s, 82.0 MB/s 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.728 00:12:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.988 00:12:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.249 00:12:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.509 00:12:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:30.509 00:12:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:30.509 00:12:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.509 00:12:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:30.509 00:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.509 00:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:30.509 00:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:30.509 00:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:30.509 00:12:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:30.509 00:12:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:30.509 00:12:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:30.509 00:12:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:30.509 00:12:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:30.770 00:12:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:30.770 [2024-10-09 00:12:01.291470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.770 [2024-10-09 00:12:01.345498] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.770 [2024-10-09 00:12:01.345499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.770 [2024-10-09 00:12:01.375191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:30.770 [2024-10-09 00:12:01.375222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.114 00:12:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.114 00:12:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:34.114 spdk_app_start Round 2 00:04:34.114 00:12:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3020819 /var/tmp/spdk-nbd.sock 00:04:34.114 00:12:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3020819 ']' 00:04:34.114 00:12:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.114 00:12:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.114 00:12:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
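Each time a Malloc bdev is exported in the rounds above, the trace runs a waitfornbd check: poll /proc/partitions until the nbd name appears, then read one 4 KiB block with O_DIRECT and confirm a non-empty scratch file. A rough sketch of that polling loop follows; the retry count, grep, dd, and stat calls are taken from the trace, while the sleep interval and scratch path are assumptions.

# Sketch of the waitfornbd pattern seen repeatedly in the trace above.
waitfornbd() {
    local nbd_name=$1                       # e.g. nbd0
    local test_file=/tmp/nbdtest            # assumed; the trace uses spdk/test/event/nbdtest
    local i size

    # wait (up to 20 tries) for the kernel to publish the device
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                           # interval assumed; not visible in the trace
    done

    # read one block back and make sure something actually arrived
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of="$test_file" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$test_file")
        rm -f "$test_file"
        [ "$size" != 0 ] && return 0
    done
    return 1
}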
00:04:34.114 00:12:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.114 00:12:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.114 00:12:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.114 00:12:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:34.114 00:12:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.114 Malloc0 00:04:34.114 00:12:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.114 Malloc1 00:04:34.375 00:12:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.375 /dev/nbd0 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:34.375 1+0 records in 00:04:34.375 1+0 records out 00:04:34.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309896 s, 13.2 MB/s 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:34.375 00:12:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.375 00:12:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.636 /dev/nbd1 00:04:34.636 00:12:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.636 00:12:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.636 1+0 records in 00:04:34.636 1+0 records out 00:04:34.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002721 s, 15.1 MB/s 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:34.636 00:12:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:34.636 00:12:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.636 00:12:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.636 00:12:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.636 00:12:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.636 00:12:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:34.897 { 00:04:34.897 "nbd_device": "/dev/nbd0", 00:04:34.897 "bdev_name": "Malloc0" 00:04:34.897 }, 00:04:34.897 { 00:04:34.897 "nbd_device": "/dev/nbd1", 00:04:34.897 "bdev_name": "Malloc1" 00:04:34.897 } 00:04:34.897 ]' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.897 { 00:04:34.897 "nbd_device": "/dev/nbd0", 00:04:34.897 "bdev_name": "Malloc0" 00:04:34.897 }, 00:04:34.897 { 00:04:34.897 "nbd_device": "/dev/nbd1", 00:04:34.897 "bdev_name": "Malloc1" 00:04:34.897 } 00:04:34.897 ]' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.897 /dev/nbd1' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.897 /dev/nbd1' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.897 256+0 records in 00:04:34.897 256+0 records out 00:04:34.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127148 s, 82.5 MB/s 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.897 256+0 records in 00:04:34.897 256+0 records out 00:04:34.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120937 s, 86.7 MB/s 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.897 256+0 records in 00:04:34.897 256+0 records out 00:04:34.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129926 s, 80.7 MB/s 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.897 00:12:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.157 00:12:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.495 00:12:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.495 00:12:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.495 00:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.495 00:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.790 00:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.790 00:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.790 00:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.790 00:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.790 00:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.790 00:12:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.790 00:12:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.790 00:12:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.790 00:12:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.790 00:12:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.790 00:12:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.049 [2024-10-09 00:12:06.429226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.049 [2024-10-09 00:12:06.482775] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.049 [2024-10-09 00:12:06.482776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.049 [2024-10-09 00:12:06.511786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.049 [2024-10-09 00:12:06.511816] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:39.342 00:12:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3020819 /var/tmp/spdk-nbd.sock 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3020819 ']' 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
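The count checks that bracket each round (count=2 after the disks start, count=0 after they stop) come from an nbd_get_count-style query: ask the app for its exported disks over the RPC socket, extract the device names with jq, and count how many look like /dev/nbd*. The sketch below shows that query with the rpc.py path shortened relative to the workspace path in the trace; it is illustrative, not the exact helper.

# Sketch: count currently exported nbd devices via the SPDK RPC socket.
rpc_server=/var/tmp/spdk-nbd.sock
nbd_disks_json=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks)      # e.g. [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)             # "0" when nothing is exported
echo "exported nbd devices: $count"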
00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:39.342 00:12:09 event.app_repeat -- event/event.sh@39 -- # killprocess 3020819 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3020819 ']' 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3020819 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3020819 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3020819' 00:04:39.342 killing process with pid 3020819 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3020819 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3020819 00:04:39.342 spdk_app_start is called in Round 0. 00:04:39.342 Shutdown signal received, stop current app iteration 00:04:39.342 Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 reinitialization... 00:04:39.342 spdk_app_start is called in Round 1. 00:04:39.342 Shutdown signal received, stop current app iteration 00:04:39.342 Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 reinitialization... 00:04:39.342 spdk_app_start is called in Round 2. 00:04:39.342 Shutdown signal received, stop current app iteration 00:04:39.342 Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 reinitialization... 00:04:39.342 spdk_app_start is called in Round 3. 
00:04:39.342 Shutdown signal received, stop current app iteration 00:04:39.342 00:12:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:39.342 00:12:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:39.342 00:04:39.342 real 0m16.384s 00:04:39.342 user 0m36.071s 00:04:39.342 sys 0m2.274s 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.342 00:12:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.342 ************************************ 00:04:39.342 END TEST app_repeat 00:04:39.342 ************************************ 00:04:39.342 00:12:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:39.342 00:12:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:39.342 00:12:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.342 00:12:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.342 00:12:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.342 ************************************ 00:04:39.342 START TEST cpu_locks 00:04:39.342 ************************************ 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:39.342 * Looking for test storage... 00:04:39.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.342 00:12:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:39.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.342 --rc genhtml_branch_coverage=1 00:04:39.342 --rc genhtml_function_coverage=1 00:04:39.342 --rc genhtml_legend=1 00:04:39.342 --rc geninfo_all_blocks=1 00:04:39.342 --rc geninfo_unexecuted_blocks=1 00:04:39.342 00:04:39.342 ' 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:39.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.342 --rc genhtml_branch_coverage=1 00:04:39.342 --rc genhtml_function_coverage=1 00:04:39.342 --rc genhtml_legend=1 00:04:39.342 --rc geninfo_all_blocks=1 00:04:39.342 --rc geninfo_unexecuted_blocks=1 00:04:39.342 00:04:39.342 ' 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:39.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.342 --rc genhtml_branch_coverage=1 00:04:39.342 --rc genhtml_function_coverage=1 00:04:39.342 --rc genhtml_legend=1 00:04:39.342 --rc geninfo_all_blocks=1 00:04:39.342 --rc geninfo_unexecuted_blocks=1 00:04:39.342 00:04:39.342 ' 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:39.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.342 --rc genhtml_branch_coverage=1 00:04:39.342 --rc genhtml_function_coverage=1 00:04:39.342 --rc genhtml_legend=1 00:04:39.342 --rc geninfo_all_blocks=1 00:04:39.342 --rc geninfo_unexecuted_blocks=1 00:04:39.342 00:04:39.342 ' 00:04:39.342 00:12:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:39.342 00:12:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:39.342 00:12:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:39.342 00:12:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.342 00:12:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.603 ************************************ 
00:04:39.603 START TEST default_locks 00:04:39.603 ************************************ 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3024747 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3024747 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3024747 ']' 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.603 00:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.603 [2024-10-09 00:12:10.048055] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:39.603 [2024-10-09 00:12:10.048119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024747 ] 00:04:39.603 [2024-10-09 00:12:10.131151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.603 [2024-10-09 00:12:10.207161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.543 00:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.543 00:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:40.543 00:12:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3024747 00:04:40.544 00:12:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3024747 00:04:40.544 00:12:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.803 lslocks: write error 00:04:40.803 00:12:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3024747 00:04:40.803 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3024747 ']' 00:04:40.803 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3024747 00:04:40.803 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:40.803 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.803 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3024747 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 3024747' 00:04:41.064 killing process with pid 3024747 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3024747 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3024747 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3024747 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3024747 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3024747 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3024747 ']' 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3024747) - No such process 00:04:41.064 ERROR: process (pid: 3024747) is no longer running 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:41.064 00:04:41.064 real 0m1.709s 00:04:41.064 user 0m1.822s 00:04:41.064 sys 0m0.622s 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.064 00:12:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.064 ************************************ 00:04:41.064 END TEST default_locks 00:04:41.064 ************************************ 00:04:41.324 00:12:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:41.324 00:12:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.324 00:12:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.324 00:12:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.324 ************************************ 00:04:41.324 START TEST default_locks_via_rpc 00:04:41.324 ************************************ 00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3025314 00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3025314 00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3025314 ']' 00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.324 00:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.324 [2024-10-09 00:12:11.825764] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:41.324 [2024-10-09 00:12:11.825818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025314 ] 00:04:41.324 [2024-10-09 00:12:11.901741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.324 [2024-10-09 00:12:11.958291] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3025314 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3025314 00:04:42.263 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.531 00:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3025314 00:04:42.531 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3025314 ']' 00:04:42.531 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3025314 00:04:42.531 00:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:42.531 00:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.531 00:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3025314 00:04:42.531 00:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.531 
00:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.531 00:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3025314' 00:04:42.531 killing process with pid 3025314 00:04:42.531 00:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3025314 00:04:42.531 00:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3025314 00:04:42.791 00:04:42.791 real 0m1.493s 00:04:42.791 user 0m1.606s 00:04:42.791 sys 0m0.518s 00:04:42.791 00:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.791 00:12:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.791 ************************************ 00:04:42.791 END TEST default_locks_via_rpc 00:04:42.791 ************************************ 00:04:42.791 00:12:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:42.791 00:12:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.791 00:12:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.791 00:12:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.791 ************************************ 00:04:42.791 START TEST non_locking_app_on_locked_coremask 00:04:42.791 ************************************ 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3025600 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3025600 /var/tmp/spdk.sock 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3025600 ']' 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.791 00:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.791 [2024-10-09 00:12:13.392696] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
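The lslocks probe that keeps recurring in this trace is the entire locking check: each core a target claims is backed by a lock file named /var/tmp/spdk_cpu_lock_NNN, and "lslocks -p <pid> | grep -q spdk_cpu_lock" simply asks whether the target still holds any of them (the stray "lslocks: write error" lines are most likely just SIGPIPE noise from grep -q closing the pipe early). A minimal stand-alone version of that probe, with a hypothetical PID standing in for the spdk_tgt just launched:

    pid=3025600                                    # hypothetical: PID of an spdk_tgt started with -m 0x1
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "PID $pid holds core lock file(s), e.g. /var/tmp/spdk_cpu_lock_000"
    else
        echo "PID $pid holds no core locks (e.g. it was started with --disable-cpumask-locks)"
    fi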
00:04:42.791 [2024-10-09 00:12:13.392755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025600 ] 00:04:43.050 [2024-10-09 00:12:13.467617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.050 [2024-10-09 00:12:13.524304] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3025737 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3025737 /var/tmp/spdk2.sock 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3025737 ']' 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.619 00:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.619 [2024-10-09 00:12:14.212442] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:43.619 [2024-10-09 00:12:14.212492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025737 ] 00:04:43.879 [2024-10-09 00:12:14.284363] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.879 [2024-10-09 00:12:14.284383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.879 [2024-10-09 00:12:14.394742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.449 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.449 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:44.449 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3025600 00:04:44.449 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3025600 00:04:44.449 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.390 lslocks: write error 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3025600 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3025600 ']' 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3025600 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3025600 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3025600' 00:04:45.390 killing process with pid 3025600 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3025600 00:04:45.390 00:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3025600 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3025737 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3025737 ']' 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3025737 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3025737 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3025737' 00:04:45.653 
killing process with pid 3025737 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3025737 00:04:45.653 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3025737 00:04:45.913 00:04:45.913 real 0m3.072s 00:04:45.913 user 0m3.363s 00:04:45.913 sys 0m0.974s 00:04:45.913 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.914 00:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 ************************************ 00:04:45.914 END TEST non_locking_app_on_locked_coremask 00:04:45.914 ************************************ 00:04:45.914 00:12:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:45.914 00:12:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.914 00:12:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.914 00:12:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 ************************************ 00:04:45.914 START TEST locking_app_on_unlocked_coremask 00:04:45.914 ************************************ 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3026198 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3026198 /var/tmp/spdk.sock 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3026198 ']' 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.914 00:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 [2024-10-09 00:12:16.544599] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:45.914 [2024-10-09 00:12:16.544656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026198 ] 00:04:46.173 [2024-10-09 00:12:16.622316] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:46.173 [2024-10-09 00:12:16.622344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.173 [2024-10-09 00:12:16.681972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3026462 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3026462 /var/tmp/spdk2.sock 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3026462 ']' 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.742 00:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.742 [2024-10-09 00:12:17.363993] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
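This pairing is the point of locking_app_on_unlocked_coremask: the first target was started with --disable-cpumask-locks, so core 0 stays unclaimed, and the second target, launched with locking enabled on the same mask but on its own RPC socket, is expected to come up cleanly. Roughly, outside the harness (assuming spdk_tgt is on PATH; the log uses the full build/bin path):

    spdk_tgt -m 0x1 --disable-cpumask-locks &      # first instance: runs on core 0, takes no lock file
    first=$!
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # second instance: free to claim core 0
    second=$!
    # ...exercise both targets, then tear them down
    kill "$first" "$second"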
00:04:46.742 [2024-10-09 00:12:17.364041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026462 ] 00:04:47.002 [2024-10-09 00:12:17.433769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.002 [2024-10-09 00:12:17.545163] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.571 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.571 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:47.571 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3026462 00:04:47.571 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3026462 00:04:47.571 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.510 lslocks: write error 00:04:48.510 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3026198 00:04:48.510 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3026198 ']' 00:04:48.510 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3026198 00:04:48.510 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:48.510 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.510 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026198 00:04:48.511 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.511 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.511 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026198' 00:04:48.511 killing process with pid 3026198 00:04:48.511 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3026198 00:04:48.511 00:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3026198 00:04:48.770 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3026462 00:04:48.770 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3026462 ']' 00:04:48.770 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3026462 00:04:48.770 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:48.770 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.770 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026462 00:04:49.030 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.030 00:12:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.030 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026462' 00:04:49.030 killing process with pid 3026462 00:04:49.030 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3026462 00:04:49.030 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3026462 00:04:49.030 00:04:49.030 real 0m3.142s 00:04:49.030 user 0m3.459s 00:04:49.030 sys 0m0.975s 00:04:49.030 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.030 00:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.030 ************************************ 00:04:49.030 END TEST locking_app_on_unlocked_coremask 00:04:49.030 ************************************ 00:04:49.290 00:12:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:49.290 00:12:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.290 00:12:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.290 00:12:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.290 ************************************ 00:04:49.290 START TEST locking_app_on_locked_coremask 00:04:49.290 ************************************ 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3026864 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3026864 /var/tmp/spdk.sock 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3026864 ']' 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.290 00:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.290 [2024-10-09 00:12:19.763603] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:49.290 [2024-10-09 00:12:19.763657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026864 ] 00:04:49.290 [2024-10-09 00:12:19.841315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.290 [2024-10-09 00:12:19.897021] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.231 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.231 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:50.231 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:50.231 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3027172 00:04:50.231 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3027172 /var/tmp/spdk2.sock 00:04:50.231 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:50.231 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3027172 /var/tmp/spdk2.sock 00:04:50.231 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3027172 /var/tmp/spdk2.sock 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3027172 ']' 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.232 00:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.232 [2024-10-09 00:12:20.579638] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
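Here the expectation is inverted: the first locking_app_on_locked_coremask target already owns the core 0 lock, so the second launch is wrapped in the harness's NOT helper and must fail with the "Cannot create lock on core 0 ..." / "Unable to acquire lock on assigned core mask - exiting." pair seen just below. A rough sketch of that negative check (again assuming spdk_tgt is on PATH):

    spdk_tgt -m 0x1 &                                  # claims the core 0 lock file
    holder=$!
    sleep 1                                            # crude stand-in for waitforlisten
    if spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then    # must exit non-zero: core 0 is already claimed
        echo "unexpected: second target acquired core 0" >&2
        kill "$holder"
        exit 1
    fi
    kill "$holder"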
00:04:50.232 [2024-10-09 00:12:20.579690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027172 ] 00:04:50.232 [2024-10-09 00:12:20.649895] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3026864 has claimed it. 00:04:50.232 [2024-10-09 00:12:20.649928] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:50.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3027172) - No such process 00:04:50.802 ERROR: process (pid: 3027172) is no longer running 00:04:50.802 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.802 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:50.802 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:50.802 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.802 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:50.802 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.802 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3026864 00:04:50.802 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3026864 00:04:50.802 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.372 lslocks: write error 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3026864 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3026864 ']' 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3026864 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026864 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026864' 00:04:51.372 killing process with pid 3026864 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3026864 00:04:51.372 00:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3026864 00:04:51.631 00:04:51.631 real 0m2.324s 00:04:51.631 user 0m2.585s 00:04:51.631 sys 0m0.658s 00:04:51.631 00:12:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:04:51.631 00:12:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.631 ************************************ 00:04:51.631 END TEST locking_app_on_locked_coremask 00:04:51.631 ************************************ 00:04:51.631 00:12:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:51.631 00:12:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.631 00:12:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.631 00:12:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.631 ************************************ 00:04:51.631 START TEST locking_overlapped_coremask 00:04:51.631 ************************************ 00:04:51.631 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:04:51.631 00:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3027537 00:04:51.631 00:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3027537 /var/tmp/spdk.sock 00:04:51.631 00:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:51.631 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3027537 ']' 00:04:51.631 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.631 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.631 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.631 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.632 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.632 [2024-10-09 00:12:22.166464] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
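For locking_overlapped_coremask the first target runs with -m 0x7, i.e. cores 0 to 2, so it should leave exactly three lock files behind; the second target launched below uses -m 0x1c (cores 2 to 4), which overlaps only on core 2 and is therefore expected to be rejected. The overlap arithmetic and the remaining-locks comparison mirrored from check_remaining_locks further down:

    printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))     # -> 0x4, only core 2 is in both masks

    shopt -s nullglob
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})       # cores 0-2 for mask 0x7
    if [[ "${locks[*]}" != "${expected[*]}" ]]; then
        echo "unexpected core lock files: ${locks[*]:-none}" >&2
        exit 1
    fi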
00:04:51.632 [2024-10-09 00:12:22.166522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027537 ] 00:04:51.632 [2024-10-09 00:12:22.244277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:51.891 [2024-10-09 00:12:22.305813] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.891 [2024-10-09 00:12:22.306071] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.891 [2024-10-09 00:12:22.306072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3027557 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3027557 /var/tmp/spdk2.sock 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3027557 /var/tmp/spdk2.sock 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3027557 /var/tmp/spdk2.sock 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3027557 ']' 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.461 00:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.461 [2024-10-09 00:12:23.027912] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:52.461 [2024-10-09 00:12:23.027965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027557 ] 00:04:52.721 [2024-10-09 00:12:23.120931] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3027537 has claimed it. 00:04:52.721 [2024-10-09 00:12:23.120971] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:53.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3027557) - No such process 00:04:53.290 ERROR: process (pid: 3027557) is no longer running 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3027537 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3027537 ']' 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3027537 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3027537 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3027537' 00:04:53.290 killing process with pid 3027537 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3027537 00:04:53.290 00:12:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3027537 00:04:53.290 00:04:53.290 real 0m1.798s 00:04:53.290 user 0m5.124s 00:04:53.290 sys 0m0.407s 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.290 00:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.290 ************************************ 00:04:53.290 END TEST locking_overlapped_coremask 00:04:53.290 ************************************ 00:04:53.549 00:12:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:53.549 00:12:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.549 00:12:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.549 00:12:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.549 ************************************ 00:04:53.549 START TEST locking_overlapped_coremask_via_rpc 00:04:53.549 ************************************ 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3027913 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3027913 /var/tmp/spdk.sock 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3027913 ']' 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.549 00:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.549 [2024-10-09 00:12:24.040424] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:53.549 [2024-10-09 00:12:24.040482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027913 ] 00:04:53.549 [2024-10-09 00:12:24.119065] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:53.549 [2024-10-09 00:12:24.119091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:53.550 [2024-10-09 00:12:24.179151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.550 [2024-10-09 00:12:24.179298] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.550 [2024-10-09 00:12:24.179300] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3027969 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3027969 /var/tmp/spdk2.sock 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3027969 ']' 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.490 00:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.490 [2024-10-09 00:12:24.895333] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:54.490 [2024-10-09 00:12:24.895386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027969 ] 00:04:54.490 [2024-10-09 00:12:24.989904] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
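The two spdk_tgt instances in this test are deliberately started with overlapping coremasks: -m 0x7 selects cores 0-2 and -m 0x1c selects cores 2-4, so both claim core 2. A minimal, self-contained bash sketch of decoding such a hex mask into core indices; decode_coremask is a hypothetical helper, not part of the SPDK scripts:

decode_coremask() {
    local mask=$(( $1 ))              # accepts hex such as 0x7 or 0x1c
    local core=0 selected=""
    while (( mask != 0 )); do
        if (( mask & 1 )); then selected+="$core "; fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "mask $1 -> cores: $selected"
}
decode_coremask 0x7     # mask 0x7 -> cores: 0 1 2
decode_coremask 0x1c    # mask 0x1c -> cores: 2 3 4

The overlap on core 2 is exactly what the "Cannot create lock on core 2" messages in this test exercise.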
00:04:54.490 [2024-10-09 00:12:24.989931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.490 [2024-10-09 00:12:25.119377] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.490 [2024-10-09 00:12:25.119535] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.490 [2024-10-09 00:12:25.119536] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:55.059 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.325 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.326 [2024-10-09 00:12:25.699799] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3027913 has claimed it. 
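Each core an SPDK target claims is backed by a lock file /var/tmp/spdk_cpu_lock_NNN; the "Cannot create lock on core 2" error above means process 3027913 already holds the core-2 file, and the check_remaining_locks helper traced in this test simply compares the lock files that actually exist against the expected set. A minimal restatement of that comparison in plain bash, using the paths from the log and assuming cores 0-2 are the expected ones:

# Compare existing per-core lock files against the set expected for cores 0-2.
expected=(/var/tmp/spdk_cpu_lock_{000..002})   # brace expansion: _000 _001 _002
shopt -s nullglob
actual=(/var/tmp/spdk_cpu_lock_*)              # lock files actually present
if [[ "${actual[*]}" == "${expected[*]}" ]]; then
    echo "locks match: ${actual[*]}"
else
    echo "unexpected lock set: ${actual[*]:-<none>}" >&2
fi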
00:04:55.326 request: 00:04:55.326 { 00:04:55.326 "method": "framework_enable_cpumask_locks", 00:04:55.326 "req_id": 1 00:04:55.326 } 00:04:55.326 Got JSON-RPC error response 00:04:55.326 response: 00:04:55.326 { 00:04:55.326 "code": -32603, 00:04:55.326 "message": "Failed to claim CPU core: 2" 00:04:55.326 } 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3027913 /var/tmp/spdk.sock 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3027913 ']' 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3027969 /var/tmp/spdk2.sock 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3027969 ']' 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
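The JSON-RPC exchange above is the heart of this test: both targets were launched with --disable-cpumask-locks, and framework_enable_cpumask_locks is then invoked on each so that the second one fails to claim the shared core. A minimal sketch of reproducing those two calls by hand with scripts/rpc.py, assuming both targets are still running on the sockets shown in the log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# First target listens on the default socket /var/tmp/spdk.sock.
$RPC framework_enable_cpumask_locks

# Second target was started with -r /var/tmp/spdk2.sock; -s selects that socket.
# With the overlapping coremask this call fails with code -32603,
# "Failed to claim CPU core: 2", as in the response above.
$RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks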
00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.326 00:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.599 00:12:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.599 00:12:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:55.599 00:12:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:55.599 00:12:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:55.599 00:12:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:55.599 00:12:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:55.599 00:04:55.599 real 0m2.092s 00:04:55.599 user 0m0.877s 00:04:55.599 sys 0m0.135s 00:04:55.599 00:12:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.599 00:12:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.599 ************************************ 00:04:55.599 END TEST locking_overlapped_coremask_via_rpc 00:04:55.599 ************************************ 00:04:55.599 00:12:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:55.599 00:12:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3027913 ]] 00:04:55.599 00:12:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3027913 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3027913 ']' 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3027913 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3027913 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3027913' 00:04:55.599 killing process with pid 3027913 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3027913 00:04:55.599 00:12:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3027913 00:04:55.858 00:12:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3027969 ]] 00:04:55.858 00:12:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3027969 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3027969 ']' 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3027969 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3027969 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3027969' 00:04:55.858 killing process with pid 3027969 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3027969 00:04:55.858 00:12:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3027969 00:04:56.118 00:12:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:56.118 00:12:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:56.118 00:12:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3027913 ]] 00:04:56.118 00:12:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3027913 00:04:56.118 00:12:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3027913 ']' 00:04:56.118 00:12:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3027913 00:04:56.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3027913) - No such process 00:04:56.118 00:12:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3027913 is not found' 00:04:56.118 Process with pid 3027913 is not found 00:04:56.118 00:12:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3027969 ]] 00:04:56.118 00:12:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3027969 00:04:56.118 00:12:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3027969 ']' 00:04:56.118 00:12:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3027969 00:04:56.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3027969) - No such process 00:04:56.118 00:12:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3027969 is not found' 00:04:56.118 Process with pid 3027969 is not found 00:04:56.118 00:12:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:56.118 00:04:56.118 real 0m16.913s 00:04:56.118 user 0m28.808s 00:04:56.118 sys 0m5.245s 00:04:56.118 00:12:26 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.118 00:12:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.118 ************************************ 00:04:56.118 END TEST cpu_locks 00:04:56.118 ************************************ 00:04:56.118 00:04:56.118 real 0m43.690s 00:04:56.118 user 1m25.920s 00:04:56.118 sys 0m8.630s 00:04:56.118 00:12:26 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.118 00:12:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.118 ************************************ 00:04:56.118 END TEST event 00:04:56.118 ************************************ 00:04:56.118 00:12:26 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:56.118 00:12:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.118 00:12:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.118 00:12:26 -- common/autotest_common.sh@10 -- # set +x 00:04:56.379 ************************************ 00:04:56.379 START TEST thread 00:04:56.379 ************************************ 00:04:56.379 00:12:26 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:56.379 * Looking for test storage... 00:04:56.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:56.380 00:12:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.380 00:12:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.380 00:12:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.380 00:12:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.380 00:12:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.380 00:12:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.380 00:12:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.380 00:12:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.380 00:12:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.380 00:12:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.380 00:12:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.380 00:12:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:56.380 00:12:26 thread -- scripts/common.sh@345 -- # : 1 00:04:56.380 00:12:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.380 00:12:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.380 00:12:26 thread -- scripts/common.sh@365 -- # decimal 1 00:04:56.380 00:12:26 thread -- scripts/common.sh@353 -- # local d=1 00:04:56.380 00:12:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.380 00:12:26 thread -- scripts/common.sh@355 -- # echo 1 00:04:56.380 00:12:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.380 00:12:26 thread -- scripts/common.sh@366 -- # decimal 2 00:04:56.380 00:12:26 thread -- scripts/common.sh@353 -- # local d=2 00:04:56.380 00:12:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.380 00:12:26 thread -- scripts/common.sh@355 -- # echo 2 00:04:56.380 00:12:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.380 00:12:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.380 00:12:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.380 00:12:26 thread -- scripts/common.sh@368 -- # return 0 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:56.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.380 --rc genhtml_branch_coverage=1 00:04:56.380 --rc genhtml_function_coverage=1 00:04:56.380 --rc genhtml_legend=1 00:04:56.380 --rc geninfo_all_blocks=1 00:04:56.380 --rc geninfo_unexecuted_blocks=1 00:04:56.380 00:04:56.380 ' 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:56.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.380 --rc genhtml_branch_coverage=1 00:04:56.380 --rc genhtml_function_coverage=1 00:04:56.380 --rc genhtml_legend=1 00:04:56.380 --rc geninfo_all_blocks=1 00:04:56.380 --rc geninfo_unexecuted_blocks=1 00:04:56.380 
00:04:56.380 ' 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:56.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.380 --rc genhtml_branch_coverage=1 00:04:56.380 --rc genhtml_function_coverage=1 00:04:56.380 --rc genhtml_legend=1 00:04:56.380 --rc geninfo_all_blocks=1 00:04:56.380 --rc geninfo_unexecuted_blocks=1 00:04:56.380 00:04:56.380 ' 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:56.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.380 --rc genhtml_branch_coverage=1 00:04:56.380 --rc genhtml_function_coverage=1 00:04:56.380 --rc genhtml_legend=1 00:04:56.380 --rc geninfo_all_blocks=1 00:04:56.380 --rc geninfo_unexecuted_blocks=1 00:04:56.380 00:04:56.380 ' 00:04:56.380 00:12:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.380 00:12:26 thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.640 ************************************ 00:04:56.640 START TEST thread_poller_perf 00:04:56.640 ************************************ 00:04:56.640 00:12:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:56.640 [2024-10-09 00:12:27.039844] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:56.640 [2024-10-09 00:12:27.039961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028658 ] 00:04:56.640 [2024-10-09 00:12:27.121511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.640 [2024-10-09 00:12:27.190049] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.640 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:58.023 [2024-10-08T22:12:28.658Z] ====================================== 00:04:58.023 [2024-10-08T22:12:28.658Z] busy:2405571900 (cyc) 00:04:58.023 [2024-10-08T22:12:28.658Z] total_run_count: 417000 00:04:58.023 [2024-10-08T22:12:28.658Z] tsc_hz: 2400000000 (cyc) 00:04:58.023 [2024-10-08T22:12:28.658Z] ====================================== 00:04:58.023 [2024-10-08T22:12:28.658Z] poller_cost: 5768 (cyc), 2403 (nsec) 00:04:58.023 00:04:58.023 real 0m1.220s 00:04:58.023 user 0m1.126s 00:04:58.023 sys 0m0.090s 00:04:58.023 00:12:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.023 00:12:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.023 ************************************ 00:04:58.023 END TEST thread_poller_perf 00:04:58.023 ************************************ 00:04:58.023 00:12:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:58.023 00:12:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:58.023 00:12:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.023 00:12:28 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.023 ************************************ 00:04:58.023 START TEST thread_poller_perf 00:04:58.023 ************************************ 00:04:58.023 00:12:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:58.023 [2024-10-09 00:12:28.336222] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:58.023 [2024-10-09 00:12:28.336328] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028810 ] 00:04:58.023 [2024-10-09 00:12:28.419309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.023 [2024-10-09 00:12:28.486953] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.023 Running 1000 pollers for 1 seconds with 0 microseconds period. 
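Judging from the banner above, poller_perf's -b flag is the number of pollers, -l the poller period in microseconds and -t the run time in seconds, and poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. A small awk sketch recomputing the figures reported for the 1-microsecond-period run above (integer division to match the tool's output):

awk 'BEGIN {
    busy = 2405571900                  # busy: cycles spent running pollers
    runs = 417000                      # total_run_count
    tsc  = 2400000000                  # tsc_hz: cycles per second
    cyc  = int(busy / runs)            # 5768 cycles per poller invocation
    nsec = int(cyc / tsc * 1e9)        # 2403 nanoseconds
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'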
00:04:58.962 [2024-10-08T22:12:29.597Z] ====================================== 00:04:58.962 [2024-10-08T22:12:29.597Z] busy:2401463900 (cyc) 00:04:58.962 [2024-10-08T22:12:29.597Z] total_run_count: 5565000 00:04:58.962 [2024-10-08T22:12:29.597Z] tsc_hz: 2400000000 (cyc) 00:04:58.962 [2024-10-08T22:12:29.597Z] ====================================== 00:04:58.962 [2024-10-08T22:12:29.597Z] poller_cost: 431 (cyc), 179 (nsec) 00:04:58.962 00:04:58.962 real 0m1.215s 00:04:58.962 user 0m1.118s 00:04:58.962 sys 0m0.092s 00:04:58.962 00:12:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.962 00:12:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.962 ************************************ 00:04:58.962 END TEST thread_poller_perf 00:04:58.962 ************************************ 00:04:58.962 00:12:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:58.962 00:04:58.962 real 0m2.795s 00:04:58.962 user 0m2.402s 00:04:58.962 sys 0m0.407s 00:04:58.962 00:12:29 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.962 00:12:29 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.962 ************************************ 00:04:58.962 END TEST thread 00:04:58.962 ************************************ 00:04:59.222 00:12:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:59.222 00:12:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:59.222 00:12:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.222 00:12:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.222 00:12:29 -- common/autotest_common.sh@10 -- # set +x 00:04:59.222 ************************************ 00:04:59.222 START TEST app_cmdline 00:04:59.222 ************************************ 00:04:59.222 00:12:29 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:59.222 * Looking for test storage... 
00:04:59.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:59.222 00:12:29 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.222 00:12:29 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.222 00:12:29 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.222 00:12:29 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.222 00:12:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.222 00:12:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.222 00:12:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.222 00:12:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.222 00:12:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.223 00:12:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.223 --rc genhtml_branch_coverage=1 00:04:59.223 --rc genhtml_function_coverage=1 00:04:59.223 --rc genhtml_legend=1 00:04:59.223 --rc geninfo_all_blocks=1 00:04:59.223 --rc geninfo_unexecuted_blocks=1 00:04:59.223 00:04:59.223 ' 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.223 --rc genhtml_branch_coverage=1 00:04:59.223 --rc genhtml_function_coverage=1 00:04:59.223 --rc genhtml_legend=1 00:04:59.223 --rc geninfo_all_blocks=1 00:04:59.223 --rc geninfo_unexecuted_blocks=1 
00:04:59.223 00:04:59.223 ' 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.223 --rc genhtml_branch_coverage=1 00:04:59.223 --rc genhtml_function_coverage=1 00:04:59.223 --rc genhtml_legend=1 00:04:59.223 --rc geninfo_all_blocks=1 00:04:59.223 --rc geninfo_unexecuted_blocks=1 00:04:59.223 00:04:59.223 ' 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.223 --rc genhtml_branch_coverage=1 00:04:59.223 --rc genhtml_function_coverage=1 00:04:59.223 --rc genhtml_legend=1 00:04:59.223 --rc geninfo_all_blocks=1 00:04:59.223 --rc geninfo_unexecuted_blocks=1 00:04:59.223 00:04:59.223 ' 00:04:59.223 00:12:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:59.223 00:12:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3029141 00:04:59.223 00:12:29 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:59.223 00:12:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3029141 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3029141 ']' 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.223 00:12:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:59.483 [2024-10-09 00:12:29.885853] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
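cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock; the calls traced below confirm that spdk_get_version succeeds while env_dpdk_get_mem_stats is rejected with -32601 "Method not found". A minimal sketch of exercising that allowlist by hand with scripts/rpc.py, assuming the target is still running:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC spdk_get_version          # allowed: returns the version JSON shown below
$RPC rpc_get_methods           # allowed: lists exactly the two whitelisted methods
$RPC env_dpdk_get_mem_stats    # not on the allowlist: fails with -32601 "Method not found"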
00:04:59.483 [2024-10-09 00:12:29.885926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029141 ] 00:04:59.483 [2024-10-09 00:12:29.965295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.484 [2024-10-09 00:12:30.034035] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.056 00:12:30 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.056 00:12:30 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:00.056 00:12:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:00.316 { 00:05:00.316 "version": "SPDK v25.01-pre git sha1 6101e4048", 00:05:00.316 "fields": { 00:05:00.316 "major": 25, 00:05:00.316 "minor": 1, 00:05:00.316 "patch": 0, 00:05:00.316 "suffix": "-pre", 00:05:00.316 "commit": "6101e4048" 00:05:00.316 } 00:05:00.316 } 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:00.316 00:12:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:00.316 00:12:30 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:00.575 request: 00:05:00.575 { 00:05:00.575 "method": "env_dpdk_get_mem_stats", 00:05:00.575 "req_id": 1 00:05:00.575 } 00:05:00.575 Got JSON-RPC error response 00:05:00.575 response: 00:05:00.575 { 00:05:00.575 "code": -32601, 00:05:00.575 "message": "Method not found" 00:05:00.575 } 00:05:00.575 00:12:31 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:00.575 00:12:31 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.575 00:12:31 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:00.575 00:12:31 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.575 00:12:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3029141 00:05:00.575 00:12:31 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3029141 ']' 00:05:00.576 00:12:31 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3029141 00:05:00.576 00:12:31 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:00.576 00:12:31 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.576 00:12:31 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3029141 00:05:00.576 00:12:31 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.576 00:12:31 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.576 00:12:31 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3029141' 00:05:00.576 killing process with pid 3029141 00:05:00.576 00:12:31 app_cmdline -- common/autotest_common.sh@969 -- # kill 3029141 00:05:00.576 00:12:31 app_cmdline -- common/autotest_common.sh@974 -- # wait 3029141 00:05:00.835 00:05:00.835 real 0m1.673s 00:05:00.835 user 0m1.981s 00:05:00.835 sys 0m0.444s 00:05:00.835 00:12:31 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.835 00:12:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:00.835 ************************************ 00:05:00.835 END TEST app_cmdline 00:05:00.835 ************************************ 00:05:00.835 00:12:31 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:00.835 00:12:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.835 00:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.835 00:12:31 -- common/autotest_common.sh@10 -- # set +x 00:05:00.835 ************************************ 00:05:00.835 START TEST version 00:05:00.835 ************************************ 00:05:00.835 00:12:31 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:01.096 * Looking for test storage... 
00:05:01.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.096 00:12:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.096 00:12:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.096 00:12:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.096 00:12:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.096 00:12:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.096 00:12:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.096 00:12:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.096 00:12:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.096 00:12:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.096 00:12:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.096 00:12:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.096 00:12:31 version -- scripts/common.sh@344 -- # case "$op" in 00:05:01.096 00:12:31 version -- scripts/common.sh@345 -- # : 1 00:05:01.096 00:12:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.096 00:12:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.096 00:12:31 version -- scripts/common.sh@365 -- # decimal 1 00:05:01.096 00:12:31 version -- scripts/common.sh@353 -- # local d=1 00:05:01.096 00:12:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.096 00:12:31 version -- scripts/common.sh@355 -- # echo 1 00:05:01.096 00:12:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.096 00:12:31 version -- scripts/common.sh@366 -- # decimal 2 00:05:01.096 00:12:31 version -- scripts/common.sh@353 -- # local d=2 00:05:01.096 00:12:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.096 00:12:31 version -- scripts/common.sh@355 -- # echo 2 00:05:01.096 00:12:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.096 00:12:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.096 00:12:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.096 00:12:31 version -- scripts/common.sh@368 -- # return 0 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.096 --rc genhtml_branch_coverage=1 00:05:01.096 --rc genhtml_function_coverage=1 00:05:01.096 --rc genhtml_legend=1 00:05:01.096 --rc geninfo_all_blocks=1 00:05:01.096 --rc geninfo_unexecuted_blocks=1 00:05:01.096 00:05:01.096 ' 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.096 --rc genhtml_branch_coverage=1 00:05:01.096 --rc genhtml_function_coverage=1 00:05:01.096 --rc genhtml_legend=1 00:05:01.096 --rc geninfo_all_blocks=1 00:05:01.096 --rc geninfo_unexecuted_blocks=1 00:05:01.096 00:05:01.096 ' 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.096 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.096 --rc genhtml_branch_coverage=1 00:05:01.096 --rc genhtml_function_coverage=1 00:05:01.096 --rc genhtml_legend=1 00:05:01.096 --rc geninfo_all_blocks=1 00:05:01.096 --rc geninfo_unexecuted_blocks=1 00:05:01.096 00:05:01.096 ' 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.096 --rc genhtml_branch_coverage=1 00:05:01.096 --rc genhtml_function_coverage=1 00:05:01.096 --rc genhtml_legend=1 00:05:01.096 --rc geninfo_all_blocks=1 00:05:01.096 --rc geninfo_unexecuted_blocks=1 00:05:01.096 00:05:01.096 ' 00:05:01.096 00:12:31 version -- app/version.sh@17 -- # get_header_version major 00:05:01.096 00:12:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:01.096 00:12:31 version -- app/version.sh@14 -- # cut -f2 00:05:01.096 00:12:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:01.096 00:12:31 version -- app/version.sh@17 -- # major=25 00:05:01.096 00:12:31 version -- app/version.sh@18 -- # get_header_version minor 00:05:01.096 00:12:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:01.096 00:12:31 version -- app/version.sh@14 -- # cut -f2 00:05:01.096 00:12:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:01.096 00:12:31 version -- app/version.sh@18 -- # minor=1 00:05:01.096 00:12:31 version -- app/version.sh@19 -- # get_header_version patch 00:05:01.096 00:12:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:01.096 00:12:31 version -- app/version.sh@14 -- # cut -f2 00:05:01.096 00:12:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:01.096 00:12:31 version -- app/version.sh@19 -- # patch=0 00:05:01.096 00:12:31 version -- app/version.sh@20 -- # get_header_version suffix 00:05:01.096 00:12:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:01.096 00:12:31 version -- app/version.sh@14 -- # cut -f2 00:05:01.096 00:12:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:01.096 00:12:31 version -- app/version.sh@20 -- # suffix=-pre 00:05:01.096 00:12:31 version -- app/version.sh@22 -- # version=25.1 00:05:01.096 00:12:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:01.096 00:12:31 version -- app/version.sh@28 -- # version=25.1rc0 00:05:01.096 00:12:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:01.096 00:12:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:01.096 00:12:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:01.096 00:12:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:01.096 00:05:01.096 real 0m0.276s 00:05:01.096 user 0m0.173s 00:05:01.096 sys 0m0.150s 00:05:01.096 00:12:31 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.096 
00:12:31 version -- common/autotest_common.sh@10 -- # set +x 00:05:01.096 ************************************ 00:05:01.096 END TEST version 00:05:01.096 ************************************ 00:05:01.096 00:12:31 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:01.096 00:12:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:01.096 00:12:31 -- spdk/autotest.sh@194 -- # uname -s 00:05:01.096 00:12:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:01.096 00:12:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:01.096 00:12:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:01.096 00:12:31 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:01.096 00:12:31 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:01.096 00:12:31 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:01.096 00:12:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.096 00:12:31 -- common/autotest_common.sh@10 -- # set +x 00:05:01.357 00:12:31 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:01.357 00:12:31 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:01.357 00:12:31 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:01.357 00:12:31 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:01.357 00:12:31 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:01.357 00:12:31 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:01.357 00:12:31 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:01.357 00:12:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:01.357 00:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.357 00:12:31 -- common/autotest_common.sh@10 -- # set +x 00:05:01.357 ************************************ 00:05:01.357 START TEST nvmf_tcp 00:05:01.357 ************************************ 00:05:01.357 00:12:31 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:01.357 * Looking for test storage... 
00:05:01.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:01.357 00:12:31 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.357 00:12:31 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.357 00:12:31 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.357 00:12:31 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.357 00:12:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:01.618 00:12:31 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.618 00:12:31 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:01.618 00:12:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:01.618 00:12:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.618 00:12:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:01.618 00:12:31 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.618 00:12:31 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.618 00:12:31 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.618 00:12:31 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:01.618 00:12:31 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.618 00:12:32 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.618 --rc genhtml_branch_coverage=1 00:05:01.618 --rc genhtml_function_coverage=1 00:05:01.618 --rc genhtml_legend=1 00:05:01.618 --rc geninfo_all_blocks=1 00:05:01.618 --rc geninfo_unexecuted_blocks=1 00:05:01.618 00:05:01.618 ' 00:05:01.618 00:12:32 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.618 --rc genhtml_branch_coverage=1 00:05:01.618 --rc genhtml_function_coverage=1 00:05:01.618 --rc genhtml_legend=1 00:05:01.618 --rc geninfo_all_blocks=1 00:05:01.618 --rc geninfo_unexecuted_blocks=1 00:05:01.618 00:05:01.618 ' 00:05:01.618 00:12:32 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:01.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.618 --rc genhtml_branch_coverage=1 00:05:01.618 --rc genhtml_function_coverage=1 00:05:01.618 --rc genhtml_legend=1 00:05:01.618 --rc geninfo_all_blocks=1 00:05:01.618 --rc geninfo_unexecuted_blocks=1 00:05:01.619 00:05:01.619 ' 00:05:01.619 00:12:32 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.619 --rc genhtml_branch_coverage=1 00:05:01.619 --rc genhtml_function_coverage=1 00:05:01.619 --rc genhtml_legend=1 00:05:01.619 --rc geninfo_all_blocks=1 00:05:01.619 --rc geninfo_unexecuted_blocks=1 00:05:01.619 00:05:01.619 ' 00:05:01.619 00:12:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:01.619 00:12:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:01.619 00:12:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:01.619 00:12:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:01.619 00:12:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.619 00:12:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.619 ************************************ 00:05:01.619 START TEST nvmf_target_core 00:05:01.619 ************************************ 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:01.619 * Looking for test storage... 00:05:01.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.619 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.880 --rc genhtml_branch_coverage=1 00:05:01.880 --rc genhtml_function_coverage=1 00:05:01.880 --rc genhtml_legend=1 00:05:01.880 --rc geninfo_all_blocks=1 00:05:01.880 --rc geninfo_unexecuted_blocks=1 00:05:01.880 00:05:01.880 ' 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.880 --rc genhtml_branch_coverage=1 00:05:01.880 --rc genhtml_function_coverage=1 00:05:01.880 --rc genhtml_legend=1 00:05:01.880 --rc geninfo_all_blocks=1 00:05:01.880 --rc geninfo_unexecuted_blocks=1 00:05:01.880 00:05:01.880 ' 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.880 --rc genhtml_branch_coverage=1 00:05:01.880 --rc genhtml_function_coverage=1 00:05:01.880 --rc genhtml_legend=1 00:05:01.880 --rc geninfo_all_blocks=1 00:05:01.880 --rc geninfo_unexecuted_blocks=1 00:05:01.880 00:05:01.880 ' 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.880 --rc genhtml_branch_coverage=1 00:05:01.880 --rc genhtml_function_coverage=1 00:05:01.880 --rc genhtml_legend=1 00:05:01.880 --rc geninfo_all_blocks=1 00:05:01.880 --rc geninfo_unexecuted_blocks=1 00:05:01.880 00:05:01.880 ' 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.880 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:01.881 
************************************ 00:05:01.881 START TEST nvmf_abort 00:05:01.881 ************************************ 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:01.881 * Looking for test storage... 00:05:01.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.881 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.142 --rc genhtml_branch_coverage=1 00:05:02.142 --rc genhtml_function_coverage=1 00:05:02.142 --rc genhtml_legend=1 00:05:02.142 --rc geninfo_all_blocks=1 00:05:02.142 --rc geninfo_unexecuted_blocks=1 00:05:02.142 00:05:02.142 ' 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.142 --rc genhtml_branch_coverage=1 00:05:02.142 --rc genhtml_function_coverage=1 00:05:02.142 --rc genhtml_legend=1 00:05:02.142 --rc geninfo_all_blocks=1 00:05:02.142 --rc geninfo_unexecuted_blocks=1 00:05:02.142 00:05:02.142 ' 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.142 --rc genhtml_branch_coverage=1 00:05:02.142 --rc genhtml_function_coverage=1 00:05:02.142 --rc genhtml_legend=1 00:05:02.142 --rc geninfo_all_blocks=1 00:05:02.142 --rc geninfo_unexecuted_blocks=1 00:05:02.142 00:05:02.142 ' 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.142 --rc genhtml_branch_coverage=1 00:05:02.142 --rc genhtml_function_coverage=1 00:05:02.142 --rc genhtml_legend=1 00:05:02.142 --rc geninfo_all_blocks=1 00:05:02.142 --rc geninfo_unexecuted_blocks=1 00:05:02.142 00:05:02.142 ' 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.142 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
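The target/abort.sh run begins with nvmftestinit, and the trace that follows shows what that helper does on this phy rig: it leaves one e810 port (cvl_0_1, the initiator side) in the default network namespace, moves the other (cvl_0_0, the target side) into a private namespace, assigns 10.0.0.1 and 10.0.0.2, opens TCP port 4420 in iptables, and pings in both directions. A minimal manual sketch of the same topology, using placeholder interface names eth0/eth1 instead of the renamed cvl_* devices (an illustrative assumption, not the test's actual names):

  # Sketch only - mirrors the ip/iptables calls visible in the trace below.
  # eth0 = target-side port (placeholder), eth1 = initiator-side port (placeholder).
  ip netns add spdk_tgt_ns                        # private namespace for the target port
  ip link set eth0 netns spdk_tgt_ns              # move the target-side port into it
  ip addr add 10.0.0.1/24 dev eth1                # initiator address, default namespace
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth0
  ip link set eth1 up
  ip netns exec spdk_tgt_ns ip link set eth0 up
  ip netns exec spdk_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                              # initiator -> target sanity check
  ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1    # target -> initiator sanity check

Isolating the target port in its own namespace is what lets the two ports on one host carry real NVMe/TCP traffic between initiator and target instead of short-circuiting over the local stack.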
00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:02.143 00:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:10.277 00:12:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:10.277 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:10.278 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:10.278 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:10.278 00:12:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:10.278 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:10.278 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:10.278 00:12:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:10.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:10.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:05:10.278 00:05:10.278 --- 10.0.0.2 ping statistics --- 00:05:10.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:10.278 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:10.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:10.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:05:10.278 00:05:10.278 --- 10.0.0.1 ping statistics --- 00:05:10.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:10.278 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=3033626 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3033626 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3033626 ']' 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.278 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.278 [2024-10-09 00:12:40.037256] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:05:10.278 [2024-10-09 00:12:40.037323] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:10.278 [2024-10-09 00:12:40.128046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.278 [2024-10-09 00:12:40.226692] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:10.278 [2024-10-09 00:12:40.226761] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:10.278 [2024-10-09 00:12:40.226770] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:10.278 [2024-10-09 00:12:40.226777] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:10.278 [2024-10-09 00:12:40.226783] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:10.278 [2024-10-09 00:12:40.228280] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.278 [2024-10-09 00:12:40.228539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.278 [2024-10-09 00:12:40.228638] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.278 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.279 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:10.279 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:10.279 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.279 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.279 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:10.279 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:10.279 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.279 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.279 [2024-10-09 00:12:40.898965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.539 Malloc0 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.539 Delay0 
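At this point the target (started via nvmfappstart -m 0xE inside the cvl_0_0_ns_spdk namespace) has a TCP transport, a 64 MB malloc bdev with 4096-byte blocks, and a Delay0 bdev layered on Malloc0 whose four latency arguments (-r/-t/-w/-n 1000000, average and p99 read/write latency if the usual bdev_delay_create semantics apply) keep I/Os in flight long enough for aborts to land; the rest of the trace attaches Delay0 to nqn.2016-06.io.spdk:cnode0 and listens on 10.0.0.2:4420. Assuming rpc_cmd is the autotest wrapper around scripts/rpc.py on the default RPC socket, the same configuration issued by hand would look roughly like:

  # Sketch only - the rpc_cmd calls from this test, written as direct rpc.py invocations.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0       # 64 MB RAM-backed bdev, 4096-byte blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # load generator that produces the abort statistics shown below:
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128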
00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.539 [2024-10-09 00:12:40.983363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.539 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:10.539 [2024-10-09 00:12:41.123988] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:13.088 Initializing NVMe Controllers 00:05:13.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:13.088 controller IO queue size 128 less than required 00:05:13.088 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:13.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:13.088 Initialization complete. Launching workers. 
00:05:13.088 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28405 00:05:13.088 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28470, failed to submit 62 00:05:13.088 success 28409, unsuccessful 61, failed 0 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:13.088 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:13.089 rmmod nvme_tcp 00:05:13.089 rmmod nvme_fabrics 00:05:13.089 rmmod nvme_keyring 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3033626 ']' 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3033626 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3033626 ']' 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3033626 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3033626 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3033626' 00:05:13.089 killing process with pid 3033626 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3033626 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3033626 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:13.089 00:12:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:13.089 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:15.637 00:05:15.637 real 0m13.322s 00:05:15.637 user 0m14.054s 00:05:15.637 sys 0m6.511s 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.637 ************************************ 00:05:15.637 END TEST nvmf_abort 00:05:15.637 ************************************ 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:15.637 ************************************ 00:05:15.637 START TEST nvmf_ns_hotplug_stress 00:05:15.637 ************************************ 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:15.637 * Looking for test storage... 
00:05:15.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:15.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.637 --rc genhtml_branch_coverage=1 00:05:15.637 --rc genhtml_function_coverage=1 00:05:15.637 --rc genhtml_legend=1 00:05:15.637 --rc geninfo_all_blocks=1 00:05:15.637 --rc geninfo_unexecuted_blocks=1 00:05:15.637 00:05:15.637 ' 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:15.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.637 --rc genhtml_branch_coverage=1 00:05:15.637 --rc genhtml_function_coverage=1 00:05:15.637 --rc genhtml_legend=1 00:05:15.637 --rc geninfo_all_blocks=1 00:05:15.637 --rc geninfo_unexecuted_blocks=1 00:05:15.637 00:05:15.637 ' 00:05:15.637 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:15.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.637 --rc genhtml_branch_coverage=1 00:05:15.637 --rc genhtml_function_coverage=1 00:05:15.637 --rc genhtml_legend=1 00:05:15.637 --rc geninfo_all_blocks=1 00:05:15.637 --rc geninfo_unexecuted_blocks=1 00:05:15.637 00:05:15.638 ' 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:15.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.638 --rc genhtml_branch_coverage=1 00:05:15.638 --rc genhtml_function_coverage=1 00:05:15.638 --rc genhtml_legend=1 00:05:15.638 --rc geninfo_all_blocks=1 00:05:15.638 --rc geninfo_unexecuted_blocks=1 00:05:15.638 00:05:15.638 ' 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:15.638 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:23.777 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:23.777 
00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:23.777 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:23.777 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:23.777 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:23.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:23.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:05:23.777 00:05:23.777 --- 10.0.0.2 ping statistics --- 00:05:23.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:23.777 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:23.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:23.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:05:23.777 00:05:23.777 --- 10.0.0.1 ping statistics --- 00:05:23.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:23.777 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3038664 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3038664 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
3038664 ']' 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.777 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:23.777 [2024-10-09 00:12:53.584211] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:05:23.777 [2024-10-09 00:12:53.584275] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:23.777 [2024-10-09 00:12:53.677036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.777 [2024-10-09 00:12:53.773506] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:23.777 [2024-10-09 00:12:53.773563] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:23.777 [2024-10-09 00:12:53.773572] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:23.777 [2024-10-09 00:12:53.773580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:23.777 [2024-10-09 00:12:53.773587] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
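(The records from here on are the ns_hotplug_stress target setup and its remove/add/resize loop. Condensed into the rpc.py sequence visible in this log, the flow is roughly the sketch below; paths are abbreviated to rpc.py and spdk_nvme_perf, and the backgrounding of spdk_nvme_perf plus the while-loop form are assumptions about the script's structure rather than captured output.)

  # sketch of the ns_hotplug_stress flow, reconstructed from the commands logged below
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &      # keep I/O in flight while namespaces are hot-plugged
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do           # repeat for as long as the perf run is alive
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"
  done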
00:05:23.777 [2024-10-09 00:12:53.774887] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.777 [2024-10-09 00:12:53.775051] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.777 [2024-10-09 00:12:53.775062] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.777 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.777 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:23.777 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:24.037 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.037 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:24.037 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:24.037 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:24.037 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:24.037 [2024-10-09 00:12:54.619360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.037 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:24.299 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:24.559 [2024-10-09 00:12:55.032774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:24.559 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:24.819 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:24.819 Malloc0 00:05:25.079 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:25.079 Delay0 00:05:25.079 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.339 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:25.609 NULL1 00:05:25.609 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:25.609 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3039074 00:05:25.609 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:25.609 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:25.609 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.873 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.133 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:26.133 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:26.392 true 00:05:26.392 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:26.392 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.392 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.652 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:26.652 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:26.911 true 00:05:26.911 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:26.911 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.911 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.202 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:27.202 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:27.518 true 00:05:27.518 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:27.518 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.518 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.826 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:27.826 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:27.826 true 00:05:27.826 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:27.826 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.087 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.347 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:28.347 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:28.347 true 00:05:28.608 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:28.608 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.608 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.869 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:28.869 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:28.869 true 00:05:29.131 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:29.131 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.131 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.391 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:29.391 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:29.652 true 00:05:29.652 00:13:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:29.652 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.652 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.912 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:29.912 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:30.172 true 00:05:30.172 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:30.172 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.431 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.431 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:30.431 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:30.691 true 00:05:30.691 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:30.691 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.995 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.995 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:30.995 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:31.254 true 00:05:31.254 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:31.254 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.515 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.515 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:31.515 00:13:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:31.773 true 00:05:31.773 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:31.773 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.033 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.033 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:32.033 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:32.293 true 00:05:32.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:32.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.553 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.813 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:32.813 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:32.813 true 00:05:32.813 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:32.813 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.073 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.332 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:33.332 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:33.332 true 00:05:33.332 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:33.332 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.592 00:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.852 00:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:33.852 00:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:33.852 true 00:05:34.122 00:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:34.122 00:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.122 00:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.384 00:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:34.384 00:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:34.384 true 00:05:34.644 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:34.644 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.644 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.903 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:34.903 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:35.163 true 00:05:35.163 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:35.163 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.163 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.421 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:35.421 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:35.680 true 00:05:35.680 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:35.680 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.680 00:13:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.942 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:35.942 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:36.202 true 00:05:36.202 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:36.202 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.462 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.462 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:36.462 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:36.722 true 00:05:36.722 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:36.722 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.982 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.982 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:36.982 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:37.241 true 00:05:37.241 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:37.241 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.501 00:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.762 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:37.762 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:37.762 true 00:05:37.762 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:37.762 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.023 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.282 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:38.282 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:38.282 true 00:05:38.282 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:38.282 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.541 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.801 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:38.801 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:38.801 true 00:05:38.801 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:38.801 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.067 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.326 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:39.326 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:39.326 true 00:05:39.326 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:39.326 00:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.586 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.846 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:39.846 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:39.846 true 00:05:40.106 00:13:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:40.106 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.106 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.367 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:40.367 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:40.627 true 00:05:40.627 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:40.627 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.627 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.888 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:40.888 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:41.149 true 00:05:41.149 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:41.149 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.149 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.410 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:41.410 00:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:41.671 true 00:05:41.671 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:41.671 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.931 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.931 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:41.931 00:13:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:42.191 true 00:05:42.191 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:42.191 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.450 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.450 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:42.450 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:42.709 true 00:05:42.709 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:42.709 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.968 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.237 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:43.237 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:43.237 true 00:05:43.237 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:43.237 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.502 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.762 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:43.762 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:43.762 true 00:05:43.762 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:43.762 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.023 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.284 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:44.284 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:44.284 true 00:05:44.284 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:44.284 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.549 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.811 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:44.811 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:44.811 true 00:05:45.072 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:45.072 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.072 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.331 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:45.331 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:45.590 true 00:05:45.590 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:45.590 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.590 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.850 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:45.850 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:46.110 true 00:05:46.110 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:46.110 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.369 00:13:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.369 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:46.370 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:46.630 true 00:05:46.630 00:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:46.630 00:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.889 00:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.889 00:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:46.889 00:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:47.149 true 00:05:47.149 00:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:47.149 00:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.409 00:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.669 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:47.669 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:47.669 true 00:05:47.669 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:47.669 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.930 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.190 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:48.190 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:48.190 true 00:05:48.190 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:48.190 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.450 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.710 00:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:48.710 00:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:48.710 true 00:05:48.970 00:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:48.970 00:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.970 00:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.229 00:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:49.229 00:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:49.489 true 00:05:49.489 00:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:49.489 00:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.489 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.749 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:49.749 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:50.009 true 00:05:50.009 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:50.009 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.269 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.269 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:50.269 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:50.529 true 00:05:50.529 00:13:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:50.529 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.789 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.789 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:50.789 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:51.049 true 00:05:51.049 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:51.049 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.308 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.308 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:51.308 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:51.569 true 00:05:51.569 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:51.569 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.829 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.829 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:51.829 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:52.090 true 00:05:52.090 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:52.090 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.349 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.609 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:52.609 00:13:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:52.609 true 00:05:52.609 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:52.610 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.869 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.131 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:53.131 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:53.131 true 00:05:53.131 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:53.131 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.391 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.650 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:53.650 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:05:53.650 true 00:05:53.650 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:53.650 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.911 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.171 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:05:54.171 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:05:54.171 true 00:05:54.431 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:54.431 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.431 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.691 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:05:54.691 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:05:54.952 true 00:05:54.952 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:54.952 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.952 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.211 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:05:55.211 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:05:55.471 true 00:05:55.471 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:55.471 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.731 00:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.731 00:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:05:55.731 00:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:05:55.991 true 00:05:55.991 00:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074 00:05:55.991 00:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.991 Initializing NVMe Controllers 00:05:55.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:55.991 Controller IO queue size 128, less than required. 00:05:55.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:55.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:55.991 Initialization complete. Launching workers. 
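The repeated ns_hotplug_stress.sh@44-@50 entries above are iterations of the single-namespace hot-plug loop (the sh@NN marker in each trace line is the script line being executed): while the I/O generator (PID 3039074 in this run) stays alive, namespace 1 of nqn.2016-06.io.spdk:cnode1 is hot-removed, re-added backed by the Delay0 bdev, and the NULL1 null bdev is resized one step larger. A minimal sketch of that loop, reconstructed from the sh@ markers; the rpc and perf_pid names, the starting size, and the exact loop form are assumptions, while the RPC invocations themselves are the ones logged:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1026                                                     # starting value assumed; the trace shows 1027, 1028, ...
  while kill -0 "$perf_pid"; do                                      # sh@44: run until the I/O generator exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove namespace 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add it back on the Delay0 bdev
      null_size=$((null_size + 1))                                   # sh@49: grow the target size each pass
      $rpc bdev_null_resize NULL1 "$null_size"                       # sh@50: resize NULL1 under load; the bare "true" lines are its output
  done
  wait "$perf_pid"                                                   # sh@53: reap the generator ("No such process" ends the loop)

The latency summary printed just below comes from that I/O generator, and its numbers are self-consistent: 31140.50 IOPS at 15.21 MiB/s implies roughly 512-byte I/Os (31140.50 x 512 B is about 15.2 MiB/s), and 31140.50 IOPS x 4110.36 us average latency works out to about 128 commands in flight, matching the reported controller IO queue size of 128.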
00:05:55.991 ========================================================
00:05:55.991 Latency(us)
00:05:55.991 Device Information : IOPS MiB/s Average min max
00:05:55.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31140.50 15.21 4110.36 1141.10 8033.58
00:05:55.991 ========================================================
00:05:55.991 Total : 31140.50 15.21 4110.36 1141.10 8033.58
00:05:56.251 00:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:56.251 00:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:05:56.251 00:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:05:56.510 true
00:05:56.510 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3039074
00:05:56.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3039074) - No such process
00:05:56.510 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3039074
00:05:56.510 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:56.770 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:56.770 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:56.770 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:56.770 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:56.770 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:56.770 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:57.029 null0
00:05:57.029 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:57.029 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:57.029 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:57.289 null1
00:05:57.289 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:57.289 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:57.289 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:57.289 null2
00:05:57.549 00:13:27
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:57.549 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:57.549 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:57.549 null3 00:05:57.549 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:57.549 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:57.549 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:57.810 null4 00:05:57.810 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:57.810 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:57.810 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:57.810 null5 00:05:58.069 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:58.069 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:58.069 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:58.069 null6 00:05:58.069 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:58.069 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:58.069 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:58.330 null7 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
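From sh@58 onward the test switches to the concurrent phase seen above and below: eight null bdevs (null0 through null7, each created with bdev_null_create <name> 100 4096) and eight background add_remove workers, one per namespace. A compact reconstruction from the sh@58-@66 markers, reusing the rpc shorthand from the sketch above; the loop form is an assumption, while the RPC calls and the wait on the worker PIDs are the ones logged:

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do       # sh@59-@60: one null bdev per worker
      $rpc bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do       # sh@62-@64: launch the workers in the background
      add_remove $((i + 1)) "null$i" &       # worker i hot-plugs namespace id i+1 backed by null$i
      pids+=($!)
  done
  wait "${pids[@]}"                          # sh@66: "wait 3045848 3045850 ..." in the trace

The add_remove helper itself, whose sh@14-@18 markers dominate the rest of this trace, is sketched a little further below.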
00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
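The interleaved sh@14-@18 entries above and below are those eight add_remove workers running concurrently, each repeatedly attaching its namespace with an explicit namespace id (-n) and detaching it again. A minimal reconstruction of the helper from the @14, @16, @17 and @18 markers; anything beyond what the trace shows (the loop form, the local variable handling) is assumed:

  add_remove() {
      local nsid=$1 bdev=$2                                                        # sh@14: expands to e.g. "local nsid=1 bdev=null0"
      for ((i = 0; i < 10; i++)); do                                               # sh@16: ten add/remove rounds per worker
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18
      done
  }

Because all eight workers target the same subsystem at once, the add and remove calls in the trace below interleave in arbitrary order; that contention is what the stress test exercises.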
00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3045848 3045850 3045853 3045856 3045859 3045862 3045865 3045867 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.330 00:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.591 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.852 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.112 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:59.113 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.374 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.634 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.635 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.635 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.635 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.895 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.157 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.417 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.418 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.418 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:00.418 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.418 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.418 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.418 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:00.418 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.418 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.418 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.418 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.418 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.418 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.418 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.679 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.940 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.201 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.462 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.462 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.462 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.462 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.462 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.462 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.462 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.462 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.725 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
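[annotation] The repeated rpc.py calls above and below are the namespace hotplug loop from ns_hotplug_stress.sh: several workers race to attach and detach namespaces on nqn.2016-06.io.spdk:cnode1 while I/O runs against the target. A minimal sketch of that pattern, reconstructed from the trace (the iteration bound of 10 and the rpc.py argument order are read off the @16/@17/@18 lines; the worker layout and everything else is an assumption, not the verbatim script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    stress_ns() {            # $1 = namespace ID, $2 = backing null bdev
        local i=0
        while (( i < 10 )); do
            "$rpc" nvmf_subsystem_add_ns -n "$1" "$nqn" "$2"   # attach the bdev as namespace $1
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$1"        # detach it again
            (( ++i ))
        done
    }

    for n in {1..8}; do
        stress_ns "$n" "null$((n - 1))" &   # null0..null7, matching the bdev names in the trace
    done
    wait

Each add_ns attaches one of the null0..null7 bdevs as namespace ID 1..8 and the matching remove_ns detaches it, so the subsystem sees namespaces appear and disappear continuously; the interleaved (( ++i )) / (( i < 10 )) lines in the trace are the concurrent workers' xtrace output overlapping.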
00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.726 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.986 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.986 00:13:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:02.247 rmmod nvme_tcp 00:06:02.247 rmmod nvme_fabrics 00:06:02.247 rmmod nvme_keyring 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3038664 ']' 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3038664 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3038664 ']' 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3038664 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3038664 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3038664' 00:06:02.247 killing process with pid 3038664 00:06:02.247 00:13:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3038664 00:06:02.247 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3038664 00:06:02.506 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:02.506 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:02.506 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:02.506 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:02.506 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:02.506 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:02.506 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:02.506 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:02.507 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:02.507 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.507 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:02.507 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.524 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:04.524 00:06:04.524 real 0m49.279s 00:06:04.524 user 3m20.374s 00:06:04.524 sys 0m17.358s 00:06:04.524 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.524 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.524 ************************************ 00:06:04.524 END TEST nvmf_ns_hotplug_stress 00:06:04.524 ************************************ 00:06:04.524 00:13:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:04.524 00:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:04.524 00:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.524 00:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:04.524 ************************************ 00:06:04.524 START TEST nvmf_delete_subsystem 00:06:04.524 ************************************ 00:06:04.524 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:04.813 * Looking for test storage... 
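[annotation] Before nvmf_delete_subsystem starts, the previous test tears its environment down through nvmftestfini, as traced just above: sync, unload the nvme-tcp/nvme-fabrics kernel modules (the rmmod lines), kill and wait for the nvmf target process (pid 3038664 in this run), restore iptables minus the SPDK_NVMF rules, and remove the SPDK network namespace. A condensed sketch of that sequence (command names and ordering are taken from the trace; $nvmfpid is a placeholder for the target pid, and the netns removal line is an assumption about what _remove_spdk_ns does):

    sync
    modprobe -v -r nvme-tcp                                  # also pulls out nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                        # stop the SPDK target started for the test
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the test's firewall rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true      # assumption: remove_spdk_ns deletes the test netns
    ip -4 addr flush cvl_0_1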
00:06:04.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.813 --rc genhtml_branch_coverage=1 00:06:04.813 --rc genhtml_function_coverage=1 00:06:04.813 --rc genhtml_legend=1 00:06:04.813 --rc geninfo_all_blocks=1 00:06:04.813 --rc geninfo_unexecuted_blocks=1 00:06:04.813 00:06:04.813 ' 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.813 --rc genhtml_branch_coverage=1 00:06:04.813 --rc genhtml_function_coverage=1 00:06:04.813 --rc genhtml_legend=1 00:06:04.813 --rc geninfo_all_blocks=1 00:06:04.813 --rc geninfo_unexecuted_blocks=1 00:06:04.813 00:06:04.813 ' 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.813 --rc genhtml_branch_coverage=1 00:06:04.813 --rc genhtml_function_coverage=1 00:06:04.813 --rc genhtml_legend=1 00:06:04.813 --rc geninfo_all_blocks=1 00:06:04.813 --rc geninfo_unexecuted_blocks=1 00:06:04.813 00:06:04.813 ' 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.813 --rc genhtml_branch_coverage=1 00:06:04.813 --rc genhtml_function_coverage=1 00:06:04.813 --rc genhtml_legend=1 00:06:04.813 --rc geninfo_all_blocks=1 00:06:04.813 --rc geninfo_unexecuted_blocks=1 00:06:04.813 00:06:04.813 ' 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.813 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:04.814 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:12.960 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:12.961 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.961 
00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:12.961 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:12.961 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:12.961 Found net devices under 0000:4b:00.1: cvl_0_1 
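The scan above boils down to mapping the supported NIC PCI IDs to their kernel net devices through sysfs: for every matching PCI function the script globs /sys/bus/pci/devices/$pci/net/ and records whatever interface names it finds (cvl_0_0 and cvl_0_1 on this machine). A minimal stand-alone sketch of that lookup, using lspci as a stand-in for the suite's own cached PCI scan and the 0x8086:0x159b E810 device ID reported in this run:

    # enumerate E810 functions (vendor 0x8086, device 0x159b) and list their netdevs
    for pci in $(lspci -Dmn -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] || continue        # this function has no bound net device
            echo "Found net device under $pci: ${net##*/}"
        done
    done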
00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:12.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:12.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:06:12.961 00:06:12.961 --- 10.0.0.2 ping statistics --- 00:06:12.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.961 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:12.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:12.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:06:12.961 00:06:12.961 --- 10.0.0.1 ping statistics --- 00:06:12.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.961 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3051094 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3051094 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3051094 ']' 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.961 00:13:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.961 00:13:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.961 [2024-10-09 00:13:42.945668] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:12.961 [2024-10-09 00:13:42.945740] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.961 [2024-10-09 00:13:43.014117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.961 [2024-10-09 00:13:43.098849] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:12.962 [2024-10-09 00:13:43.098910] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.962 [2024-10-09 00:13:43.098917] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.962 [2024-10-09 00:13:43.098922] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.962 [2024-10-09 00:13:43.098927] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:12.962 [2024-10-09 00:13:43.101747] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.962 [2024-10-09 00:13:43.101778] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 [2024-10-09 00:13:43.253826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:12.962 00:13:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 [2024-10-09 00:13:43.278160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 NULL1 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 Delay0 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3051138 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:12.962 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:12.962 [2024-10-09 00:13:43.395040] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
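Everything the target needs for this test is configured over JSON-RPC before the initiator-side perf run starts: a TCP transport, one subsystem, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that every I/O sits in the target for roughly a second. rpc_cmd in the trace is the test suite's RPC wrapper; a sketch of the equivalent standalone sequence with scripts/rpc.py, with every value taken from the commands logged above:

    # transport and subsystem, options exactly as logged
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 1000 MB null bdev (512-byte blocks) behind a delay bdev (latencies in microseconds)
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # drive queued I/O at it from the initiator side (same flags as the logged perf run)
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4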
00:06:14.875 00:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:14.875 00:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.875 00:13:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Write completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Write completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Write completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Write completed with error (sct=0, sc=8) 00:06:15.136 Write completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Write completed with error (sct=0, sc=8) 00:06:15.136 Write completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Write completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 Write completed with error (sct=0, sc=8) 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.136 starting I/O failed: -6 00:06:15.136 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 [2024-10-09 00:13:45.626460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c14390 is same with the 
state(6) to be set 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with 
error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 starting I/O failed: -6 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 [2024-10-09 00:13:45.627105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2640000c00 is same with the state(6) to be set 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 
00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Read completed with error (sct=0, sc=8) 00:06:15.137 Write completed with error (sct=0, sc=8) 00:06:16.084 [2024-10-09 00:13:46.578898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c15a70 is same with the state(6) to be set 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 [2024-10-09 
00:13:46.627466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f264000cfe0 is same with the state(6) to be set 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 [2024-10-09 00:13:46.627696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c14570 is same with the state(6) to be set 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 [2024-10-09 00:13:46.628571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c14930 is same with the state(6) to be set 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with 
error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Read completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.084 Write completed with error (sct=0, sc=8) 00:06:16.085 Write completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Read completed with error (sct=0, sc=8) 00:06:16.085 Write completed with error (sct=0, sc=8) 00:06:16.085 [2024-10-09 00:13:46.628940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f264000d780 is same with the state(6) to be set 00:06:16.085 Initializing NVMe Controllers 00:06:16.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:16.085 Controller IO queue size 128, less than required. 00:06:16.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:16.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:16.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:16.085 Initialization complete. Launching workers. 
00:06:16.085 ======================================================== 00:06:16.085 Latency(us) 00:06:16.085 Device Information : IOPS MiB/s Average min max 00:06:16.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.66 0.09 881721.24 458.58 1012911.57 00:06:16.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.72 0.08 896797.11 325.12 1013003.07 00:06:16.085 ======================================================== 00:06:16.085 Total : 345.37 0.17 889085.89 325.12 1013003.07 00:06:16.085 00:06:16.085 [2024-10-09 00:13:46.629467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c15a70 (9): Bad file descriptor 00:06:16.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:16.085 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.085 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:16.085 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3051138 00:06:16.085 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3051138 00:06:16.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3051138) - No such process 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3051138 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3051138 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3051138 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.674 00:13:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.674 [2024-10-09 00:13:47.160418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3051817 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3051817 00:06:16.674 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:16.674 [2024-10-09 00:13:47.248174] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
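The test then repeats the pattern with a shorter 3-second perf run against the recreated subsystem. In both passes the script never blocks directly on the initiator: it polls the background spdk_nvme_perf process, so a hung initiator fails the test within a bounded time. A condensed sketch of that idiom, with the 0.5 s probe interval and retry bound taken from the trace (perf_pid stands for the PID captured when perf was launched in the background):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
        if (( delay++ > 20 )); then             # ~10 s upper bound at 0.5 s per probe
            echo "spdk_nvme_perf did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done
    # kill -0 now reports "No such process"; reap the stored exit status.
    # A nonzero status is tolerated when the subsystem was deleted mid-I/O.
    wait "$perf_pid" || true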
00:06:17.245 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.245 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3051817 00:06:17.245 00:13:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.815 00:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.815 00:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3051817 00:06:17.815 00:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.074 00:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.074 00:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3051817 00:06:18.074 00:13:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.643 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.643 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3051817 00:06:18.643 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.215 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.215 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3051817 00:06:19.215 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.785 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.785 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3051817 00:06:19.785 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.785 Initializing NVMe Controllers 00:06:19.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:19.785 Controller IO queue size 128, less than required. 00:06:19.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:19.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:19.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:19.785 Initialization complete. Launching workers. 
00:06:19.785 ======================================================== 00:06:19.785 Latency(us) 00:06:19.785 Device Information : IOPS MiB/s Average min max 00:06:19.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001823.65 1000211.70 1004780.20 00:06:19.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003205.66 1000322.12 1040953.06 00:06:19.785 ======================================================== 00:06:19.785 Total : 256.00 0.12 1002514.65 1000211.70 1040953.06 00:06:19.785 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3051817 00:06:20.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3051817) - No such process 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3051817 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:20.355 rmmod nvme_tcp 00:06:20.355 rmmod nvme_fabrics 00:06:20.355 rmmod nvme_keyring 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3051094 ']' 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3051094 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3051094 ']' 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3051094 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3051094 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3051094' 00:06:20.355 killing process with pid 3051094 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3051094 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3051094 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.355 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:22.898 00:06:22.898 real 0m17.941s 00:06:22.898 user 0m29.876s 00:06:22.898 sys 0m6.791s 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.898 ************************************ 00:06:22.898 END TEST nvmf_delete_subsystem 00:06:22.898 ************************************ 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:22.898 ************************************ 00:06:22.898 START TEST nvmf_host_management 00:06:22.898 ************************************ 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:22.898 * Looking for test storage... 
00:06:22.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.898 --rc genhtml_branch_coverage=1 00:06:22.898 --rc genhtml_function_coverage=1 00:06:22.898 --rc genhtml_legend=1 00:06:22.898 --rc geninfo_all_blocks=1 00:06:22.898 --rc geninfo_unexecuted_blocks=1 00:06:22.898 00:06:22.898 ' 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.898 --rc genhtml_branch_coverage=1 00:06:22.898 --rc genhtml_function_coverage=1 00:06:22.898 --rc genhtml_legend=1 00:06:22.898 --rc geninfo_all_blocks=1 00:06:22.898 --rc geninfo_unexecuted_blocks=1 00:06:22.898 00:06:22.898 ' 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.898 --rc genhtml_branch_coverage=1 00:06:22.898 --rc genhtml_function_coverage=1 00:06:22.898 --rc genhtml_legend=1 00:06:22.898 --rc geninfo_all_blocks=1 00:06:22.898 --rc geninfo_unexecuted_blocks=1 00:06:22.898 00:06:22.898 ' 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.898 --rc genhtml_branch_coverage=1 00:06:22.898 --rc genhtml_function_coverage=1 00:06:22.898 --rc genhtml_legend=1 00:06:22.898 --rc geninfo_all_blocks=1 00:06:22.898 --rc geninfo_unexecuted_blocks=1 00:06:22.898 00:06:22.898 ' 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.898 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:22.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:22.899 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:31.039 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:31.039 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:31.039 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:31.040 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.040 00:14:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:31.040 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:31.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:06:31.040 00:06:31.040 --- 10.0.0.2 ping statistics --- 00:06:31.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.040 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:31.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:06:31.040 00:06:31.040 --- 10.0.0.1 ping statistics --- 00:06:31.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.040 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3056841 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3056841 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:31.040 00:14:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3056841 ']' 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.040 00:14:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.040 [2024-10-09 00:14:00.859495] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:31.040 [2024-10-09 00:14:00.859562] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.040 [2024-10-09 00:14:00.948570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.040 [2024-10-09 00:14:01.042451] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.040 [2024-10-09 00:14:01.042515] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.040 [2024-10-09 00:14:01.042524] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.040 [2024-10-09 00:14:01.042531] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.040 [2024-10-09 00:14:01.042538] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
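The trace above brings up the test network before the target is started: nvmf_tcp_init moves the target-side port (cvl_0_0) into the cvl_0_0_ns_spdk namespace at 10.0.0.2/24, leaves the initiator port (cvl_0_1) in the root namespace at 10.0.0.1/24, opens TCP port 4420 through iptables, verifies reachability in both directions with ping, and then launches nvmf_tgt inside the namespace on cores 1-4 (core mask 0x1E) with all trace groups enabled. A condensed sketch of the equivalent manual bring-up, assuming the same cvl_0_0/cvl_0_1 interface names and SPDK build path as in this trace:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1        # start from clean addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &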
00:06:31.040 [2024-10-09 00:14:01.044623] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.040 [2024-10-09 00:14:01.044785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.040 [2024-10-09 00:14:01.044951] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.040 [2024-10-09 00:14:01.044953] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.302 [2024-10-09 00:14:01.732395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.302 Malloc0 00:06:31.302 [2024-10-09 00:14:01.801793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3057135 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3057135 /var/tmp/bdevperf.sock 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3057135 ']' 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:31.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:31.302 { 00:06:31.302 "params": { 00:06:31.302 "name": "Nvme$subsystem", 00:06:31.302 "trtype": "$TEST_TRANSPORT", 00:06:31.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:31.302 "adrfam": "ipv4", 00:06:31.302 "trsvcid": "$NVMF_PORT", 00:06:31.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:31.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:31.302 "hdgst": ${hdgst:-false}, 00:06:31.302 "ddgst": ${ddgst:-false} 00:06:31.302 }, 00:06:31.302 "method": "bdev_nvme_attach_controller" 00:06:31.302 } 00:06:31.302 EOF 00:06:31.302 )") 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:31.302 00:14:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:31.302 "params": { 00:06:31.302 "name": "Nvme0", 00:06:31.302 "trtype": "tcp", 00:06:31.302 "traddr": "10.0.0.2", 00:06:31.302 "adrfam": "ipv4", 00:06:31.302 "trsvcid": "4420", 00:06:31.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:31.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:31.302 "hdgst": false, 00:06:31.302 "ddgst": false 00:06:31.302 }, 00:06:31.302 "method": "bdev_nvme_attach_controller" 00:06:31.302 }' 00:06:31.302 [2024-10-09 00:14:01.911940] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
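Just above, gen_nvmf_target_json expands the heredoc into the Nvme0 attach parameters and bdevperf is started with that configuration supplied via process substitution (--json /dev/fd/63), a queue depth of 64, 64 KiB I/Os, the verify workload and a 10 second runtime. Below is a standalone sketch of the same launch; the outer "subsystems" wrapper follows the usual SPDK JSON-config layout and is an assumption here, since only the inner params/method block is printed in the trace, and /tmp/nvme0.json is just an illustrative file name:

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # -q 64 outstanding I/Os, -o 65536 byte blocks, verify workload, 10 s runtime, as in the trace.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10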
00:06:31.302 [2024-10-09 00:14:01.912015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057135 ] 00:06:31.563 [2024-10-09 00:14:01.994474] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.563 [2024-10-09 00:14:02.090713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.831 Running I/O for 10 seconds... 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:32.404 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:32.405 00:14:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.405 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.405 [2024-10-09 00:14:02.801602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8f2d0 is same with the state(6) to be set 00:06:32.405 [2024-10-09 00:14:02.801714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8f2d0 is same with the state(6) to be set 00:06:32.405 [2024-10-09 00:14:02.801966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.405 [2024-10-09 00:14:02.802622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.405 [2024-10-09 00:14:02.802630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.406 [2024-10-09 00:14:02.802641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.406 [2024-10-09 00:14:02.802650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.406 [2024-10-09 00:14:02.802660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.406 [2024-10-09 00:14:02.802668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.406 [2024-10-09 00:14:02.802679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.406 [2024-10-09 00:14:02.802687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.406 [2024-10-09 00:14:02.802697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.406 [2024-10-09 00:14:02.802704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.406 [2024-10-09 00:14:02.802714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.406 [2024-10-09 00:14:02.802728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.406 [2024-10-09 00:14:02.802738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.406 [2024-10-09 00:14:02.802746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid 38 through cid 62, lba 78592 through 81664 in steps of 128 (len:128 each), timestamps 00:14:02.802755 through 00:14:02.803199 ...]
00:06:32.406 [2024-10-09 00:14:02.803208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ee30 is same with the state(6) to be set 00:06:32.406 [2024-10-09 00:14:02.803279] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x126ee30 was disconnected and freed. reset controller.
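Runs of near-identical abort records like the span above collapse well; when the raw console capture is needed, a quick sketch like this recovers the aborted command count and LBA range ('qpair_aborts.log' is a hypothetical file holding those lines):

# summarize the aborted READs from the raw capture
grep -o 'lba:[0-9]*' qpair_aborts.log | cut -d: -f2 | \
    awk 'NR==1 {min=$1} {max=$1; n++} END {printf "%d aborted READs, lba %d through %d\n", n, min, max}'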
00:06:32.406 [2024-10-09 00:14:02.804536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:32.406 task offset: 81792 on job bdev=Nvme0n1 fails 00:06:32.406 00:06:32.406 Latency(us) 00:06:32.406 [2024-10-08T22:14:03.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:32.406 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:32.406 Job: Nvme0n1 ended in about 0.42 seconds with error 00:06:32.406 Verification LBA range: start 0x0 length 0x400 00:06:32.406 Nvme0n1 : 0.42 1366.01 85.38 151.78 0.00 40892.65 1952.43 36481.71 00:06:32.406 [2024-10-08T22:14:03.041Z] =================================================================================================================== 00:06:32.406 [2024-10-08T22:14:03.041Z] Total : 1366.01 85.38 151.78 0.00 40892.65 1952.43 36481.71 00:06:32.406 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.406 [2024-10-09 00:14:02.806826] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.406 [2024-10-09 00:14:02.806870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10560c0 (9): Bad file descriptor 00:06:32.406 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:32.406 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.406 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.406 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.406 00:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:32.406 [2024-10-09 00:14:02.861251] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
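The rpc_cmd nvmf_subsystem_add_host step above re-adds host0 to the subsystem while the initiator-side controller is still resetting; invoked directly it would look roughly like this (rpc_cmd is assumed to be a thin wrapper over the rpc.py script used elsewhere in this log):

# allow nqn.2016-06.io.spdk:host0 to (re)connect to cnode0 so the pending reset can succeed
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0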
00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3057135 00:06:33.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3057135) - No such process 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:33.348 { 00:06:33.348 "params": { 00:06:33.348 "name": "Nvme$subsystem", 00:06:33.348 "trtype": "$TEST_TRANSPORT", 00:06:33.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:33.348 "adrfam": "ipv4", 00:06:33.348 "trsvcid": "$NVMF_PORT", 00:06:33.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:33.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:33.348 "hdgst": ${hdgst:-false}, 00:06:33.348 "ddgst": ${ddgst:-false} 00:06:33.348 }, 00:06:33.348 "method": "bdev_nvme_attach_controller" 00:06:33.348 } 00:06:33.348 EOF 00:06:33.348 )") 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:33.348 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:33.348 "params": { 00:06:33.348 "name": "Nvme0", 00:06:33.348 "trtype": "tcp", 00:06:33.348 "traddr": "10.0.0.2", 00:06:33.348 "adrfam": "ipv4", 00:06:33.348 "trsvcid": "4420", 00:06:33.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:33.348 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:33.348 "hdgst": false, 00:06:33.348 "ddgst": false 00:06:33.348 }, 00:06:33.348 "method": "bdev_nvme_attach_controller" 00:06:33.348 }' 00:06:33.348 [2024-10-09 00:14:03.879113] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:33.348 [2024-10-09 00:14:03.879171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057566 ] 00:06:33.348 [2024-10-09 00:14:03.959292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.608 [2024-10-09 00:14:04.023649] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.872 Running I/O for 1 seconds... 
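The gen_nvmf_target_json heredoc above is handed to bdevperf through an anonymous descriptor (--json /dev/fd/62); as a standalone sketch, the same run with the parameters printed above written to an ordinary file would look roughly like this (the /tmp path is hypothetical, and the outer keys follow SPDK's usual JSON-config layout):

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1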
00:06:34.817 1726.00 IOPS, 107.88 MiB/s 00:06:34.817 Latency(us) 00:06:34.817 [2024-10-08T22:14:05.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:34.817 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:34.817 Verification LBA range: start 0x0 length 0x400 00:06:34.817 Nvme0n1 : 1.03 1743.66 108.98 0.00 0.00 36052.50 6116.69 32112.64 00:06:34.817 [2024-10-08T22:14:05.452Z] =================================================================================================================== 00:06:34.817 [2024-10-08T22:14:05.452Z] Total : 1743.66 108.98 0.00 0.00 36052.50 6116.69 32112.64 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.078 rmmod nvme_tcp 00:06:35.078 rmmod nvme_fabrics 00:06:35.078 rmmod nvme_keyring 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3056841 ']' 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3056841 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3056841 ']' 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3056841 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3056841 00:06:35.078 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:35.078 00:14:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:35.079 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3056841' 00:06:35.079 killing process with pid 3056841 00:06:35.079 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3056841 00:06:35.079 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3056841 00:06:35.339 [2024-10-09 00:14:05.750112] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.339 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.251 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.251 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:37.251 00:06:37.251 real 0m14.726s 00:06:37.251 user 0m23.809s 00:06:37.251 sys 0m6.685s 00:06:37.251 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.251 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.251 ************************************ 00:06:37.251 END TEST nvmf_host_management 00:06:37.251 ************************************ 00:06:37.512 00:14:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:37.513 00:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.513 00:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.513 00:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.513 ************************************ 00:06:37.513 START TEST nvmf_lvol 00:06:37.513 ************************************ 00:06:37.513 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:37.513 * Looking for test storage... 00:06:37.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.513 --rc genhtml_branch_coverage=1 00:06:37.513 --rc genhtml_function_coverage=1 00:06:37.513 --rc genhtml_legend=1 00:06:37.513 --rc geninfo_all_blocks=1 00:06:37.513 --rc geninfo_unexecuted_blocks=1 00:06:37.513 00:06:37.513 ' 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.513 --rc genhtml_branch_coverage=1 00:06:37.513 --rc genhtml_function_coverage=1 00:06:37.513 --rc genhtml_legend=1 00:06:37.513 --rc geninfo_all_blocks=1 00:06:37.513 --rc geninfo_unexecuted_blocks=1 00:06:37.513 00:06:37.513 ' 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.513 --rc genhtml_branch_coverage=1 00:06:37.513 --rc genhtml_function_coverage=1 00:06:37.513 --rc genhtml_legend=1 00:06:37.513 --rc geninfo_all_blocks=1 00:06:37.513 --rc geninfo_unexecuted_blocks=1 00:06:37.513 00:06:37.513 ' 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.513 --rc genhtml_branch_coverage=1 00:06:37.513 --rc genhtml_function_coverage=1 00:06:37.513 --rc genhtml_legend=1 00:06:37.513 --rc geninfo_all_blocks=1 00:06:37.513 --rc geninfo_unexecuted_blocks=1 00:06:37.513 00:06:37.513 ' 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.513 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
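The lt 1.15 2 / cmp_versions trace above is only a field-by-field numeric compare of dotted versions, used here to pick the lcov option spellings; a condensed sketch of the idea (not the exact helper from scripts/common.sh):

# return 0 when the first dotted version is older than the second, e.g. lt 1.15 2
lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected: keep the --rc lcov_*_coverage=1 spellings"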
00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.774 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.775 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:45.930 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:45.930 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.930 00:14:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:45.930 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:45.930 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.930 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:45.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:06:45.931 00:06:45.931 --- 10.0.0.2 ping statistics --- 00:06:45.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.931 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:45.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:06:45.931 00:06:45.931 --- 10.0.0.1 ping statistics --- 00:06:45.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.931 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3062128 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3062128 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3062128 ']' 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.931 00:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.931 [2024-10-09 00:14:15.793365] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
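Stripped of the xtrace noise, the nvmf_tcp_init sequence traced above puts the target port of the e810 pair into its own network namespace and verifies reachability in both directions; condensed, the setup is approximately:

# target port cvl_0_0 moves into a private namespace, initiator port cvl_0_1 stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1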
00:06:45.931 [2024-10-09 00:14:15.793437] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.931 [2024-10-09 00:14:15.883457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.931 [2024-10-09 00:14:15.980111] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.931 [2024-10-09 00:14:15.980174] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.931 [2024-10-09 00:14:15.980183] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.931 [2024-10-09 00:14:15.980190] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.931 [2024-10-09 00:14:15.980196] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.931 [2024-10-09 00:14:15.981774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.931 [2024-10-09 00:14:15.982013] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.931 [2024-10-09 00:14:15.982013] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.192 00:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.192 00:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:46.192 00:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:46.192 00:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:46.192 00:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.192 00:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.192 00:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.192 [2024-10-09 00:14:16.826346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.452 00:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.713 00:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:46.713 00:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.713 00:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:46.713 00:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:46.981 00:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:47.245 00:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c4aa6a9c-1695-41f6-a2bf-aced6935e88a 00:06:47.245 00:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c4aa6a9c-1695-41f6-a2bf-aced6935e88a lvol 20 00:06:47.505 00:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5d1f6505-256a-4e7c-b801-766376b7e813 00:06:47.505 00:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:47.505 00:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5d1f6505-256a-4e7c-b801-766376b7e813 00:06:47.766 00:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:48.026 [2024-10-09 00:14:18.482343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.026 00:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.286 00:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3062627 00:06:48.286 00:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:48.286 00:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:49.244 00:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5d1f6505-256a-4e7c-b801-766376b7e813 MY_SNAPSHOT 00:06:49.505 00:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2628083a-a45c-40b0-ba07-a283a6b2ae8c 00:06:49.505 00:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5d1f6505-256a-4e7c-b801-766376b7e813 30 00:06:49.506 00:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2628083a-a45c-40b0-ba07-a283a6b2ae8c MY_CLONE 00:06:49.766 00:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0a7e34ce-1fc0-470e-b3f5-2a346665de57 00:06:49.766 00:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0a7e34ce-1fc0-470e-b3f5-2a346665de57 00:06:50.026 00:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3062627 00:07:00.024 Initializing NVMe Controllers 00:07:00.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:00.024 Controller IO queue size 128, less than required. 00:07:00.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
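Condensed from the rpc.py traces above, the lvol-backed subsystem that spdk_nvme_perf then exercises is assembled with this sequence (rpc.py shortened to a variable here; the lvstore and lvol UUIDs are captured from the create calls rather than hard-coded):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 (MiB) lvol, returns its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The snapshot, resize, clone, and inflate calls that follow in the trace then run against the same lvol while the perf job is still in flight.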
00:07:00.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:00.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:00.024 Initialization complete. Launching workers. 00:07:00.024 ======================================================== 00:07:00.024 Latency(us) 00:07:00.024 Device Information : IOPS MiB/s Average min max 00:07:00.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16203.90 63.30 7901.96 1490.82 61139.86 00:07:00.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17266.60 67.45 7414.99 613.54 47786.64 00:07:00.024 ======================================================== 00:07:00.024 Total : 33470.50 130.74 7650.74 613.54 61139.86 00:07:00.024 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5d1f6505-256a-4e7c-b801-766376b7e813 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4aa6a9c-1695-41f6-a2bf-aced6935e88a 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:00.024 rmmod nvme_tcp 00:07:00.024 rmmod nvme_fabrics 00:07:00.024 rmmod nvme_keyring 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3062128 ']' 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3062128 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3062128 ']' 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3062128 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3062128 00:07:00.024 00:14:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3062128' 00:07:00.024 killing process with pid 3062128 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3062128 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3062128 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.024 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.406 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:01.406 00:07:01.406 real 0m24.063s 00:07:01.406 user 1m4.884s 00:07:01.406 sys 0m8.741s 00:07:01.407 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.407 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.407 ************************************ 00:07:01.407 END TEST nvmf_lvol 00:07:01.407 ************************************ 00:07:01.407 00:14:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:01.407 00:14:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:01.407 00:14:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.668 ************************************ 00:07:01.668 START TEST nvmf_lvs_grow 00:07:01.668 ************************************ 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:01.668 * Looking for test storage... 
00:07:01.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:01.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.668 --rc genhtml_branch_coverage=1 00:07:01.668 --rc genhtml_function_coverage=1 00:07:01.668 --rc genhtml_legend=1 00:07:01.668 --rc geninfo_all_blocks=1 00:07:01.668 --rc geninfo_unexecuted_blocks=1 00:07:01.668 00:07:01.668 ' 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:01.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.668 --rc genhtml_branch_coverage=1 00:07:01.668 --rc genhtml_function_coverage=1 00:07:01.668 --rc genhtml_legend=1 00:07:01.668 --rc geninfo_all_blocks=1 00:07:01.668 --rc geninfo_unexecuted_blocks=1 00:07:01.668 00:07:01.668 ' 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:01.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.668 --rc genhtml_branch_coverage=1 00:07:01.668 --rc genhtml_function_coverage=1 00:07:01.668 --rc genhtml_legend=1 00:07:01.668 --rc geninfo_all_blocks=1 00:07:01.668 --rc geninfo_unexecuted_blocks=1 00:07:01.668 00:07:01.668 ' 00:07:01.668 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:01.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.668 --rc genhtml_branch_coverage=1 00:07:01.668 --rc genhtml_function_coverage=1 00:07:01.668 --rc genhtml_legend=1 00:07:01.668 --rc geninfo_all_blocks=1 00:07:01.668 --rc geninfo_unexecuted_blocks=1 00:07:01.668 00:07:01.668 ' 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:01.669 00:14:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.669 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.943 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:10.086 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:10.087 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:10.087 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.087 00:14:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:10.087 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:10.087 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.087 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:10.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:07:10.088 00:07:10.088 --- 10.0.0.2 ping statistics --- 00:07:10.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.088 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:10.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:07:10.088 00:07:10.088 --- 10.0.0.1 ping statistics --- 00:07:10.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.088 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3069147 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3069147 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3069147 ']' 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.088 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.088 [2024-10-09 00:14:39.793111] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:07:10.088 [2024-10-09 00:14:39.793176] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.088 [2024-10-09 00:14:39.879147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.088 [2024-10-09 00:14:39.975440] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.088 [2024-10-09 00:14:39.975498] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.088 [2024-10-09 00:14:39.975506] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.088 [2024-10-09 00:14:39.975513] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.088 [2024-10-09 00:14:39.975519] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.088 [2024-10-09 00:14:39.976314] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.088 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.088 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:10.088 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:10.088 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.088 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.088 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.088 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:10.349 [2024-10-09 00:14:40.811464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.349 ************************************ 00:07:10.349 START TEST lvs_grow_clean 00:07:10.349 ************************************ 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:10.349 00:14:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.349 00:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:10.610 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:10.610 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:10.871 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:10.871 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:10.871 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:10.871 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:10.871 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:10.871 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f576eacd-3478-4fec-954b-ea3dbd9a6230 lvol 150 00:07:11.132 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=465c3b1a-3339-42c1-8dba-96f153cdeb49 00:07:11.132 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.132 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:11.393 [2024-10-09 00:14:41.843492] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:11.393 [2024-10-09 00:14:41.843565] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:11.393 true 00:07:11.393 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:11.393 00:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:11.662 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:11.662 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.662 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 465c3b1a-3339-42c1-8dba-96f153cdeb49 00:07:11.926 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.187 [2024-10-09 00:14:42.581876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3069704 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3069704 /var/tmp/bdevperf.sock 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3069704 ']' 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:12.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.187 00:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:12.447 [2024-10-09 00:14:42.833344] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:07:12.447 [2024-10-09 00:14:42.833415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069704 ] 00:07:12.447 [2024-10-09 00:14:42.915071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.447 [2024-10-09 00:14:43.010108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.106 00:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.106 00:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:13.106 00:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:13.712 Nvme0n1 00:07:13.712 00:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:13.712 [ 00:07:13.712 { 00:07:13.712 "name": "Nvme0n1", 00:07:13.712 "aliases": [ 00:07:13.712 "465c3b1a-3339-42c1-8dba-96f153cdeb49" 00:07:13.712 ], 00:07:13.712 "product_name": "NVMe disk", 00:07:13.712 "block_size": 4096, 00:07:13.712 "num_blocks": 38912, 00:07:13.712 "uuid": "465c3b1a-3339-42c1-8dba-96f153cdeb49", 00:07:13.712 "numa_id": 0, 00:07:13.712 "assigned_rate_limits": { 00:07:13.712 "rw_ios_per_sec": 0, 00:07:13.712 "rw_mbytes_per_sec": 0, 00:07:13.712 "r_mbytes_per_sec": 0, 00:07:13.712 "w_mbytes_per_sec": 0 00:07:13.712 }, 00:07:13.712 "claimed": false, 00:07:13.712 "zoned": false, 00:07:13.712 "supported_io_types": { 00:07:13.712 "read": true, 00:07:13.712 "write": true, 00:07:13.712 "unmap": true, 00:07:13.712 "flush": true, 00:07:13.712 "reset": true, 00:07:13.712 "nvme_admin": true, 00:07:13.712 "nvme_io": true, 00:07:13.712 "nvme_io_md": false, 00:07:13.712 "write_zeroes": true, 00:07:13.712 "zcopy": false, 00:07:13.712 "get_zone_info": false, 00:07:13.712 "zone_management": false, 00:07:13.712 "zone_append": false, 00:07:13.712 "compare": true, 00:07:13.712 "compare_and_write": true, 00:07:13.712 "abort": true, 00:07:13.712 "seek_hole": false, 00:07:13.712 "seek_data": false, 00:07:13.712 "copy": true, 00:07:13.712 "nvme_iov_md": false 00:07:13.712 }, 00:07:13.712 "memory_domains": [ 00:07:13.712 { 00:07:13.712 "dma_device_id": "system", 00:07:13.712 "dma_device_type": 1 00:07:13.712 } 00:07:13.712 ], 00:07:13.712 "driver_specific": { 00:07:13.712 "nvme": [ 00:07:13.712 { 00:07:13.712 "trid": { 00:07:13.712 "trtype": "TCP", 00:07:13.712 "adrfam": "IPv4", 00:07:13.712 "traddr": "10.0.0.2", 00:07:13.712 "trsvcid": "4420", 00:07:13.712 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:13.712 }, 00:07:13.712 "ctrlr_data": { 00:07:13.712 "cntlid": 1, 00:07:13.712 "vendor_id": "0x8086", 00:07:13.712 "model_number": "SPDK bdev Controller", 00:07:13.712 "serial_number": "SPDK0", 00:07:13.712 "firmware_revision": "25.01", 00:07:13.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.712 "oacs": { 00:07:13.712 "security": 0, 00:07:13.712 "format": 0, 00:07:13.712 "firmware": 0, 00:07:13.712 "ns_manage": 0 00:07:13.712 }, 00:07:13.712 "multi_ctrlr": true, 00:07:13.712 
"ana_reporting": false 00:07:13.712 }, 00:07:13.712 "vs": { 00:07:13.712 "nvme_version": "1.3" 00:07:13.712 }, 00:07:13.712 "ns_data": { 00:07:13.712 "id": 1, 00:07:13.712 "can_share": true 00:07:13.712 } 00:07:13.712 } 00:07:13.712 ], 00:07:13.712 "mp_policy": "active_passive" 00:07:13.712 } 00:07:13.712 } 00:07:13.712 ] 00:07:13.712 00:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3070051 00:07:13.712 00:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:13.712 00:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:13.972 Running I/O for 10 seconds... 00:07:14.913 Latency(us) 00:07:14.913 [2024-10-08T22:14:45.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.913 Nvme0n1 : 1.00 24948.00 97.45 0.00 0.00 0.00 0.00 0.00 00:07:14.913 [2024-10-08T22:14:45.548Z] =================================================================================================================== 00:07:14.913 [2024-10-08T22:14:45.548Z] Total : 24948.00 97.45 0.00 0.00 0.00 0.00 0.00 00:07:14.913 00:07:15.851 00:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:15.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.851 Nvme0n1 : 2.00 25203.00 98.45 0.00 0.00 0.00 0.00 0.00 00:07:15.851 [2024-10-08T22:14:46.486Z] =================================================================================================================== 00:07:15.851 [2024-10-08T22:14:46.486Z] Total : 25203.00 98.45 0.00 0.00 0.00 0.00 0.00 00:07:15.851 00:07:15.851 true 00:07:15.851 00:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:15.851 00:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:16.112 00:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:16.112 00:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:16.112 00:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3070051 00:07:17.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.052 Nvme0n1 : 3.00 25298.00 98.82 0.00 0.00 0.00 0.00 0.00 00:07:17.052 [2024-10-08T22:14:47.687Z] =================================================================================================================== 00:07:17.052 [2024-10-08T22:14:47.687Z] Total : 25298.00 98.82 0.00 0.00 0.00 0.00 0.00 00:07:17.052 00:07:17.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.994 Nvme0n1 : 4.00 25381.25 99.15 0.00 0.00 0.00 0.00 0.00 00:07:17.994 [2024-10-08T22:14:48.629Z] 
=================================================================================================================== 00:07:17.994 [2024-10-08T22:14:48.629Z] Total : 25381.25 99.15 0.00 0.00 0.00 0.00 0.00 00:07:17.994 00:07:18.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.948 Nvme0n1 : 5.00 25424.60 99.31 0.00 0.00 0.00 0.00 0.00 00:07:18.948 [2024-10-08T22:14:49.583Z] =================================================================================================================== 00:07:18.948 [2024-10-08T22:14:49.583Z] Total : 25424.60 99.31 0.00 0.00 0.00 0.00 0.00 00:07:18.948 00:07:19.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.891 Nvme0n1 : 6.00 25442.83 99.39 0.00 0.00 0.00 0.00 0.00 00:07:19.891 [2024-10-08T22:14:50.526Z] =================================================================================================================== 00:07:19.891 [2024-10-08T22:14:50.526Z] Total : 25442.83 99.39 0.00 0.00 0.00 0.00 0.00 00:07:19.891 00:07:20.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.832 Nvme0n1 : 7.00 25474.00 99.51 0.00 0.00 0.00 0.00 0.00 00:07:20.832 [2024-10-08T22:14:51.467Z] =================================================================================================================== 00:07:20.832 [2024-10-08T22:14:51.467Z] Total : 25474.00 99.51 0.00 0.00 0.00 0.00 0.00 00:07:20.832 00:07:21.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.773 Nvme0n1 : 8.00 25497.62 99.60 0.00 0.00 0.00 0.00 0.00 00:07:21.773 [2024-10-08T22:14:52.408Z] =================================================================================================================== 00:07:21.773 [2024-10-08T22:14:52.408Z] Total : 25497.62 99.60 0.00 0.00 0.00 0.00 0.00 00:07:21.773 00:07:23.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.156 Nvme0n1 : 9.00 25515.78 99.67 0.00 0.00 0.00 0.00 0.00 00:07:23.156 [2024-10-08T22:14:53.791Z] =================================================================================================================== 00:07:23.156 [2024-10-08T22:14:53.791Z] Total : 25515.78 99.67 0.00 0.00 0.00 0.00 0.00 00:07:23.156 00:07:24.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.096 Nvme0n1 : 10.00 25530.40 99.73 0.00 0.00 0.00 0.00 0.00 00:07:24.096 [2024-10-08T22:14:54.731Z] =================================================================================================================== 00:07:24.096 [2024-10-08T22:14:54.731Z] Total : 25530.40 99.73 0.00 0.00 0.00 0.00 0.00 00:07:24.096 00:07:24.096 00:07:24.096 Latency(us) 00:07:24.096 [2024-10-08T22:14:54.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.096 Nvme0n1 : 10.01 25530.04 99.73 0.00 0.00 5010.10 2484.91 16602.45 00:07:24.096 [2024-10-08T22:14:54.731Z] =================================================================================================================== 00:07:24.096 [2024-10-08T22:14:54.731Z] Total : 25530.04 99.73 0.00 0.00 5010.10 2484.91 16602.45 00:07:24.096 { 00:07:24.096 "results": [ 00:07:24.096 { 00:07:24.096 "job": "Nvme0n1", 00:07:24.096 "core_mask": "0x2", 00:07:24.096 "workload": "randwrite", 00:07:24.096 "status": "finished", 00:07:24.096 "queue_depth": 128, 00:07:24.096 "io_size": 4096, 00:07:24.096 
"runtime": 10.005153, 00:07:24.096 "iops": 25530.0443681371, 00:07:24.096 "mibps": 99.72673581303555, 00:07:24.096 "io_failed": 0, 00:07:24.096 "io_timeout": 0, 00:07:24.096 "avg_latency_us": 5010.09890360905, 00:07:24.097 "min_latency_us": 2484.9066666666668, 00:07:24.097 "max_latency_us": 16602.453333333335 00:07:24.097 } 00:07:24.097 ], 00:07:24.097 "core_count": 1 00:07:24.097 } 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3069704 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3069704 ']' 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3069704 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3069704 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3069704' 00:07:24.097 killing process with pid 3069704 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3069704 00:07:24.097 Received shutdown signal, test time was about 10.000000 seconds 00:07:24.097 00:07:24.097 Latency(us) 00:07:24.097 [2024-10-08T22:14:54.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.097 [2024-10-08T22:14:54.732Z] =================================================================================================================== 00:07:24.097 [2024-10-08T22:14:54.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3069704 00:07:24.097 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.357 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.357 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:24.357 00:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:24.618 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:24.618 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:24.618 00:14:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:24.879 [2024-10-09 00:14:55.318314] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:24.879 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:24.879 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:24.879 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:24.879 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.879 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.879 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.880 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.880 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.880 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.880 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.880 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:24.880 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:25.140 request: 00:07:25.140 { 00:07:25.140 "uuid": "f576eacd-3478-4fec-954b-ea3dbd9a6230", 00:07:25.140 "method": "bdev_lvol_get_lvstores", 00:07:25.140 "req_id": 1 00:07:25.140 } 00:07:25.140 Got JSON-RPC error response 00:07:25.140 response: 00:07:25.140 { 00:07:25.140 "code": -19, 00:07:25.140 "message": "No such device" 00:07:25.140 } 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:25.140 aio_bdev 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 465c3b1a-3339-42c1-8dba-96f153cdeb49 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=465c3b1a-3339-42c1-8dba-96f153cdeb49 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:25.140 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:25.403 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 465c3b1a-3339-42c1-8dba-96f153cdeb49 -t 2000 00:07:25.667 [ 00:07:25.667 { 00:07:25.667 "name": "465c3b1a-3339-42c1-8dba-96f153cdeb49", 00:07:25.667 "aliases": [ 00:07:25.667 "lvs/lvol" 00:07:25.667 ], 00:07:25.667 "product_name": "Logical Volume", 00:07:25.667 "block_size": 4096, 00:07:25.667 "num_blocks": 38912, 00:07:25.667 "uuid": "465c3b1a-3339-42c1-8dba-96f153cdeb49", 00:07:25.667 "assigned_rate_limits": { 00:07:25.667 "rw_ios_per_sec": 0, 00:07:25.667 "rw_mbytes_per_sec": 0, 00:07:25.667 "r_mbytes_per_sec": 0, 00:07:25.667 "w_mbytes_per_sec": 0 00:07:25.667 }, 00:07:25.667 "claimed": false, 00:07:25.667 "zoned": false, 00:07:25.667 "supported_io_types": { 00:07:25.667 "read": true, 00:07:25.667 "write": true, 00:07:25.667 "unmap": true, 00:07:25.667 "flush": false, 00:07:25.667 "reset": true, 00:07:25.667 "nvme_admin": false, 00:07:25.667 "nvme_io": false, 00:07:25.667 "nvme_io_md": false, 00:07:25.667 "write_zeroes": true, 00:07:25.667 "zcopy": false, 00:07:25.667 "get_zone_info": false, 00:07:25.667 "zone_management": false, 00:07:25.667 "zone_append": false, 00:07:25.667 "compare": false, 00:07:25.667 "compare_and_write": false, 00:07:25.667 "abort": false, 00:07:25.667 "seek_hole": true, 00:07:25.667 "seek_data": true, 00:07:25.667 "copy": false, 00:07:25.667 "nvme_iov_md": false 00:07:25.667 }, 00:07:25.667 "driver_specific": { 00:07:25.667 "lvol": { 00:07:25.667 "lvol_store_uuid": "f576eacd-3478-4fec-954b-ea3dbd9a6230", 00:07:25.667 "base_bdev": "aio_bdev", 00:07:25.667 "thin_provision": false, 00:07:25.667 "num_allocated_clusters": 38, 00:07:25.667 "snapshot": false, 00:07:25.667 "clone": false, 00:07:25.667 "esnap_clone": false 00:07:25.667 } 00:07:25.667 } 00:07:25.667 } 00:07:25.667 ] 00:07:25.667 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:25.667 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:25.667 
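The teardown/reload check traced above boils down to a few rpc.py calls. A minimal sketch, with rpc.py abbreviating the full scripts/rpc.py path used in the log and <lvs-uuid>/<aio-file> as placeholders:
    rpc.py bdev_aio_delete aio_bdev
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>        # expected to fail with -19 "No such device"
    rpc.py bdev_aio_create <aio-file> aio_bdev 4096    # re-create the base bdev
    rpc.py bdev_wait_for_examine                       # wait for the lvol to be rediscovered
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'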
00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:25.667 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:25.667 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:25.667 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:25.927 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:25.927 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 465c3b1a-3339-42c1-8dba-96f153cdeb49 00:07:26.189 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f576eacd-3478-4fec-954b-ea3dbd9a6230 00:07:26.189 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:26.449 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.450 00:07:26.450 real 0m16.058s 00:07:26.450 user 0m15.670s 00:07:26.450 sys 0m1.440s 00:07:26.450 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.450 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 ************************************ 00:07:26.450 END TEST lvs_grow_clean 00:07:26.450 ************************************ 00:07:26.450 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:26.450 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.450 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.450 00:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 ************************************ 00:07:26.450 START TEST lvs_grow_dirty 00:07:26.450 ************************************ 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.450 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.710 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:26.710 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:26.970 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:26.970 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:26.970 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:26.970 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:26.970 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:26.970 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 lvol 150 00:07:27.231 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=18032bfe-f0f7-42b7-ae17-6ee65bb434dd 00:07:27.231 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.231 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:27.491 [2024-10-09 00:14:57.922399] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:27.491 [2024-10-09 00:14:57.922440] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:27.491 true 00:07:27.491 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:27.491 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:27.491 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:27.491 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:27.751 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 18032bfe-f0f7-42b7-ae17-6ee65bb434dd 00:07:28.012 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:28.012 [2024-10-09 00:14:58.580295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.012 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3073057 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3073057 /var/tmp/bdevperf.sock 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3073057 ']' 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:28.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.272 00:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:28.272 [2024-10-09 00:14:58.809795] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
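The dirty-grow setup traced in the preceding steps can be summarized as the following sequence (a sketch, with rpc.py standing in for the full scripts/rpc.py path and <aio-file>, <lvs-uuid>, <lvol-uuid> as placeholders):
    truncate -s 200M <aio-file>
    rpc.py bdev_aio_create <aio-file> aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
    truncate -s 400M <aio-file>                 # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev             # AIO bdev picks up the new size; lvstore still reports 49 clusters
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
The lvstore itself is only grown later (bdev_lvol_grow_lvstore, visible further down), at which point total_data_clusters jumps from 49 to 99.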
00:07:28.273 [2024-10-09 00:14:58.809845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073057 ] 00:07:28.273 [2024-10-09 00:14:58.883849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.536 [2024-10-09 00:14:58.937598] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.106 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.106 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:29.106 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:29.367 Nvme0n1 00:07:29.629 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:29.629 [ 00:07:29.629 { 00:07:29.629 "name": "Nvme0n1", 00:07:29.629 "aliases": [ 00:07:29.629 "18032bfe-f0f7-42b7-ae17-6ee65bb434dd" 00:07:29.629 ], 00:07:29.629 "product_name": "NVMe disk", 00:07:29.629 "block_size": 4096, 00:07:29.629 "num_blocks": 38912, 00:07:29.629 "uuid": "18032bfe-f0f7-42b7-ae17-6ee65bb434dd", 00:07:29.629 "numa_id": 0, 00:07:29.629 "assigned_rate_limits": { 00:07:29.629 "rw_ios_per_sec": 0, 00:07:29.629 "rw_mbytes_per_sec": 0, 00:07:29.629 "r_mbytes_per_sec": 0, 00:07:29.629 "w_mbytes_per_sec": 0 00:07:29.629 }, 00:07:29.629 "claimed": false, 00:07:29.629 "zoned": false, 00:07:29.629 "supported_io_types": { 00:07:29.629 "read": true, 00:07:29.629 "write": true, 00:07:29.629 "unmap": true, 00:07:29.629 "flush": true, 00:07:29.629 "reset": true, 00:07:29.629 "nvme_admin": true, 00:07:29.629 "nvme_io": true, 00:07:29.629 "nvme_io_md": false, 00:07:29.629 "write_zeroes": true, 00:07:29.629 "zcopy": false, 00:07:29.629 "get_zone_info": false, 00:07:29.629 "zone_management": false, 00:07:29.629 "zone_append": false, 00:07:29.629 "compare": true, 00:07:29.629 "compare_and_write": true, 00:07:29.629 "abort": true, 00:07:29.629 "seek_hole": false, 00:07:29.629 "seek_data": false, 00:07:29.629 "copy": true, 00:07:29.629 "nvme_iov_md": false 00:07:29.629 }, 00:07:29.629 "memory_domains": [ 00:07:29.629 { 00:07:29.629 "dma_device_id": "system", 00:07:29.629 "dma_device_type": 1 00:07:29.629 } 00:07:29.629 ], 00:07:29.629 "driver_specific": { 00:07:29.629 "nvme": [ 00:07:29.629 { 00:07:29.629 "trid": { 00:07:29.629 "trtype": "TCP", 00:07:29.629 "adrfam": "IPv4", 00:07:29.629 "traddr": "10.0.0.2", 00:07:29.629 "trsvcid": "4420", 00:07:29.629 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:29.629 }, 00:07:29.629 "ctrlr_data": { 00:07:29.629 "cntlid": 1, 00:07:29.629 "vendor_id": "0x8086", 00:07:29.629 "model_number": "SPDK bdev Controller", 00:07:29.629 "serial_number": "SPDK0", 00:07:29.629 "firmware_revision": "25.01", 00:07:29.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.629 "oacs": { 00:07:29.629 "security": 0, 00:07:29.629 "format": 0, 00:07:29.629 "firmware": 0, 00:07:29.629 "ns_manage": 0 00:07:29.629 }, 00:07:29.629 "multi_ctrlr": true, 00:07:29.629 
"ana_reporting": false 00:07:29.629 }, 00:07:29.629 "vs": { 00:07:29.629 "nvme_version": "1.3" 00:07:29.629 }, 00:07:29.629 "ns_data": { 00:07:29.629 "id": 1, 00:07:29.629 "can_share": true 00:07:29.629 } 00:07:29.629 } 00:07:29.629 ], 00:07:29.629 "mp_policy": "active_passive" 00:07:29.629 } 00:07:29.629 } 00:07:29.629 ] 00:07:29.629 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3073285 00:07:29.629 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:29.629 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:29.891 Running I/O for 10 seconds... 00:07:30.831 Latency(us) 00:07:30.831 [2024-10-08T22:15:01.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.831 Nvme0n1 : 1.00 25109.00 98.08 0.00 0.00 0.00 0.00 0.00 00:07:30.831 [2024-10-08T22:15:01.466Z] =================================================================================================================== 00:07:30.831 [2024-10-08T22:15:01.466Z] Total : 25109.00 98.08 0.00 0.00 0.00 0.00 0.00 00:07:30.831 00:07:31.773 00:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:31.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.773 Nvme0n1 : 2.00 25289.00 98.79 0.00 0.00 0.00 0.00 0.00 00:07:31.773 [2024-10-08T22:15:02.408Z] =================================================================================================================== 00:07:31.773 [2024-10-08T22:15:02.408Z] Total : 25289.00 98.79 0.00 0.00 0.00 0.00 0.00 00:07:31.773 00:07:31.773 true 00:07:31.773 00:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:31.773 00:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:32.034 00:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:32.034 00:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:32.034 00:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3073285 00:07:32.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.975 Nvme0n1 : 3.00 25370.67 99.10 0.00 0.00 0.00 0.00 0.00 00:07:32.975 [2024-10-08T22:15:03.610Z] =================================================================================================================== 00:07:32.975 [2024-10-08T22:15:03.610Z] Total : 25370.67 99.10 0.00 0.00 0.00 0.00 0.00 00:07:32.975 00:07:33.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.916 Nvme0n1 : 4.00 25411.25 99.26 0.00 0.00 0.00 0.00 0.00 00:07:33.916 [2024-10-08T22:15:04.551Z] 
=================================================================================================================== 00:07:33.916 [2024-10-08T22:15:04.551Z] Total : 25411.25 99.26 0.00 0.00 0.00 0.00 0.00 00:07:33.916 00:07:34.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.857 Nvme0n1 : 5.00 25461.40 99.46 0.00 0.00 0.00 0.00 0.00 00:07:34.857 [2024-10-08T22:15:05.492Z] =================================================================================================================== 00:07:34.857 [2024-10-08T22:15:05.492Z] Total : 25461.40 99.46 0.00 0.00 0.00 0.00 0.00 00:07:34.857 00:07:35.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.799 Nvme0n1 : 6.00 25494.83 99.59 0.00 0.00 0.00 0.00 0.00 00:07:35.799 [2024-10-08T22:15:06.434Z] =================================================================================================================== 00:07:35.799 [2024-10-08T22:15:06.434Z] Total : 25494.83 99.59 0.00 0.00 0.00 0.00 0.00 00:07:35.799 00:07:36.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.741 Nvme0n1 : 7.00 25518.71 99.68 0.00 0.00 0.00 0.00 0.00 00:07:36.741 [2024-10-08T22:15:07.376Z] =================================================================================================================== 00:07:36.741 [2024-10-08T22:15:07.376Z] Total : 25518.71 99.68 0.00 0.00 0.00 0.00 0.00 00:07:36.741 00:07:37.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.691 Nvme0n1 : 8.00 25544.62 99.78 0.00 0.00 0.00 0.00 0.00 00:07:37.691 [2024-10-08T22:15:08.326Z] =================================================================================================================== 00:07:37.691 [2024-10-08T22:15:08.326Z] Total : 25544.62 99.78 0.00 0.00 0.00 0.00 0.00 00:07:37.691 00:07:39.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.075 Nvme0n1 : 9.00 25557.00 99.83 0.00 0.00 0.00 0.00 0.00 00:07:39.075 [2024-10-08T22:15:09.710Z] =================================================================================================================== 00:07:39.075 [2024-10-08T22:15:09.710Z] Total : 25557.00 99.83 0.00 0.00 0.00 0.00 0.00 00:07:39.075 00:07:40.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.019 Nvme0n1 : 10.00 25561.10 99.85 0.00 0.00 0.00 0.00 0.00 00:07:40.019 [2024-10-08T22:15:10.654Z] =================================================================================================================== 00:07:40.019 [2024-10-08T22:15:10.654Z] Total : 25561.10 99.85 0.00 0.00 0.00 0.00 0.00 00:07:40.019 00:07:40.019 00:07:40.019 Latency(us) 00:07:40.019 [2024-10-08T22:15:10.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.019 Nvme0n1 : 10.00 25564.41 99.86 0.00 0.00 5003.99 3044.69 10212.69 00:07:40.019 [2024-10-08T22:15:10.654Z] =================================================================================================================== 00:07:40.019 [2024-10-08T22:15:10.654Z] Total : 25564.41 99.86 0.00 0.00 5003.99 3044.69 10212.69 00:07:40.019 { 00:07:40.019 "results": [ 00:07:40.019 { 00:07:40.019 "job": "Nvme0n1", 00:07:40.019 "core_mask": "0x2", 00:07:40.019 "workload": "randwrite", 00:07:40.019 "status": "finished", 00:07:40.019 "queue_depth": 128, 00:07:40.019 "io_size": 4096, 00:07:40.019 
"runtime": 10.003712, 00:07:40.019 "iops": 25564.410490825805, 00:07:40.019 "mibps": 99.8609784797883, 00:07:40.019 "io_failed": 0, 00:07:40.019 "io_timeout": 0, 00:07:40.019 "avg_latency_us": 5003.989430265491, 00:07:40.019 "min_latency_us": 3044.693333333333, 00:07:40.019 "max_latency_us": 10212.693333333333 00:07:40.019 } 00:07:40.019 ], 00:07:40.019 "core_count": 1 00:07:40.019 } 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3073057 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3073057 ']' 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3073057 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3073057 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3073057' 00:07:40.019 killing process with pid 3073057 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3073057 00:07:40.019 Received shutdown signal, test time was about 10.000000 seconds 00:07:40.019 00:07:40.019 Latency(us) 00:07:40.019 [2024-10-08T22:15:10.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.019 [2024-10-08T22:15:10.654Z] =================================================================================================================== 00:07:40.019 [2024-10-08T22:15:10.654Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3073057 00:07:40.019 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.287 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.287 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:40.287 00:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:40.549 00:15:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3069147 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3069147 00:07:40.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3069147 Killed "${NVMF_APP[@]}" "$@" 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3076045 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3076045 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3076045 ']' 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.549 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.549 [2024-10-09 00:15:11.119692] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:40.549 [2024-10-09 00:15:11.119755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.809 [2024-10-09 00:15:11.204241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.809 [2024-10-09 00:15:11.259678] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.809 [2024-10-09 00:15:11.259709] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.809 [2024-10-09 00:15:11.259715] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.809 [2024-10-09 00:15:11.259725] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:40.809 [2024-10-09 00:15:11.259730] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.809 [2024-10-09 00:15:11.260206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.381 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.381 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:41.381 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:41.381 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:41.381 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:41.381 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.381 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.651 [2024-10-09 00:15:12.092790] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:41.651 [2024-10-09 00:15:12.092905] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:41.651 [2024-10-09 00:15:12.092928] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:41.651 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:41.651 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 18032bfe-f0f7-42b7-ae17-6ee65bb434dd 00:07:41.651 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=18032bfe-f0f7-42b7-ae17-6ee65bb434dd 00:07:41.651 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.651 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:41.651 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.651 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.651 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:41.651 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 18032bfe-f0f7-42b7-ae17-6ee65bb434dd -t 2000 00:07:41.911 [ 00:07:41.911 { 00:07:41.911 "name": "18032bfe-f0f7-42b7-ae17-6ee65bb434dd", 00:07:41.911 "aliases": [ 00:07:41.911 "lvs/lvol" 00:07:41.911 ], 00:07:41.911 "product_name": "Logical Volume", 00:07:41.911 "block_size": 4096, 00:07:41.911 "num_blocks": 38912, 00:07:41.911 "uuid": "18032bfe-f0f7-42b7-ae17-6ee65bb434dd", 00:07:41.911 "assigned_rate_limits": { 00:07:41.911 "rw_ios_per_sec": 0, 00:07:41.911 "rw_mbytes_per_sec": 0, 
00:07:41.911 "r_mbytes_per_sec": 0, 00:07:41.911 "w_mbytes_per_sec": 0 00:07:41.911 }, 00:07:41.911 "claimed": false, 00:07:41.911 "zoned": false, 00:07:41.911 "supported_io_types": { 00:07:41.911 "read": true, 00:07:41.911 "write": true, 00:07:41.911 "unmap": true, 00:07:41.911 "flush": false, 00:07:41.911 "reset": true, 00:07:41.911 "nvme_admin": false, 00:07:41.911 "nvme_io": false, 00:07:41.911 "nvme_io_md": false, 00:07:41.912 "write_zeroes": true, 00:07:41.912 "zcopy": false, 00:07:41.912 "get_zone_info": false, 00:07:41.912 "zone_management": false, 00:07:41.912 "zone_append": false, 00:07:41.912 "compare": false, 00:07:41.912 "compare_and_write": false, 00:07:41.912 "abort": false, 00:07:41.912 "seek_hole": true, 00:07:41.912 "seek_data": true, 00:07:41.912 "copy": false, 00:07:41.912 "nvme_iov_md": false 00:07:41.912 }, 00:07:41.912 "driver_specific": { 00:07:41.912 "lvol": { 00:07:41.912 "lvol_store_uuid": "0f32cd2b-135c-41df-be0d-0e9ad5e3b321", 00:07:41.912 "base_bdev": "aio_bdev", 00:07:41.912 "thin_provision": false, 00:07:41.912 "num_allocated_clusters": 38, 00:07:41.912 "snapshot": false, 00:07:41.912 "clone": false, 00:07:41.912 "esnap_clone": false 00:07:41.912 } 00:07:41.912 } 00:07:41.912 } 00:07:41.912 ] 00:07:41.912 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:41.912 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:41.912 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:42.172 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:42.172 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:42.172 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:42.172 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:42.172 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:42.433 [2024-10-09 00:15:12.925403] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:42.433 00:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:42.693 request: 00:07:42.693 { 00:07:42.693 "uuid": "0f32cd2b-135c-41df-be0d-0e9ad5e3b321", 00:07:42.693 "method": "bdev_lvol_get_lvstores", 00:07:42.693 "req_id": 1 00:07:42.693 } 00:07:42.693 Got JSON-RPC error response 00:07:42.693 response: 00:07:42.693 { 00:07:42.693 "code": -19, 00:07:42.693 "message": "No such device" 00:07:42.693 } 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.693 aio_bdev 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 18032bfe-f0f7-42b7-ae17-6ee65bb434dd 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=18032bfe-f0f7-42b7-ae17-6ee65bb434dd 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.693 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:42.693 00:15:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:42.951 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 18032bfe-f0f7-42b7-ae17-6ee65bb434dd -t 2000 00:07:43.211 [ 00:07:43.211 { 00:07:43.211 "name": "18032bfe-f0f7-42b7-ae17-6ee65bb434dd", 00:07:43.211 "aliases": [ 00:07:43.211 "lvs/lvol" 00:07:43.211 ], 00:07:43.211 "product_name": "Logical Volume", 00:07:43.211 "block_size": 4096, 00:07:43.211 "num_blocks": 38912, 00:07:43.211 "uuid": "18032bfe-f0f7-42b7-ae17-6ee65bb434dd", 00:07:43.211 "assigned_rate_limits": { 00:07:43.211 "rw_ios_per_sec": 0, 00:07:43.211 "rw_mbytes_per_sec": 0, 00:07:43.211 "r_mbytes_per_sec": 0, 00:07:43.211 "w_mbytes_per_sec": 0 00:07:43.211 }, 00:07:43.211 "claimed": false, 00:07:43.211 "zoned": false, 00:07:43.211 "supported_io_types": { 00:07:43.211 "read": true, 00:07:43.211 "write": true, 00:07:43.211 "unmap": true, 00:07:43.211 "flush": false, 00:07:43.211 "reset": true, 00:07:43.211 "nvme_admin": false, 00:07:43.211 "nvme_io": false, 00:07:43.211 "nvme_io_md": false, 00:07:43.211 "write_zeroes": true, 00:07:43.211 "zcopy": false, 00:07:43.211 "get_zone_info": false, 00:07:43.211 "zone_management": false, 00:07:43.211 "zone_append": false, 00:07:43.211 "compare": false, 00:07:43.211 "compare_and_write": false, 00:07:43.211 "abort": false, 00:07:43.211 "seek_hole": true, 00:07:43.211 "seek_data": true, 00:07:43.211 "copy": false, 00:07:43.211 "nvme_iov_md": false 00:07:43.211 }, 00:07:43.211 "driver_specific": { 00:07:43.211 "lvol": { 00:07:43.211 "lvol_store_uuid": "0f32cd2b-135c-41df-be0d-0e9ad5e3b321", 00:07:43.211 "base_bdev": "aio_bdev", 00:07:43.211 "thin_provision": false, 00:07:43.211 "num_allocated_clusters": 38, 00:07:43.211 "snapshot": false, 00:07:43.211 "clone": false, 00:07:43.211 "esnap_clone": false 00:07:43.211 } 00:07:43.211 } 00:07:43.211 } 00:07:43.212 ] 00:07:43.212 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:43.212 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:43.212 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:43.212 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:43.212 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:43.212 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:43.472 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:43.472 00:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 18032bfe-f0f7-42b7-ae17-6ee65bb434dd 00:07:43.741 00:15:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f32cd2b-135c-41df-be0d-0e9ad5e3b321 00:07:43.741 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:44.008 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:44.008 00:07:44.008 real 0m17.502s 00:07:44.008 user 0m46.024s 00:07:44.008 sys 0m3.022s 00:07:44.008 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.008 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:44.008 ************************************ 00:07:44.008 END TEST lvs_grow_dirty 00:07:44.008 ************************************ 00:07:44.008 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:44.009 nvmf_trace.0 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.009 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.009 rmmod nvme_tcp 00:07:44.269 rmmod nvme_fabrics 00:07:44.269 rmmod nvme_keyring 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:44.269 
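The dirty variant ends the same way as the clean one: the trace file left in shared memory is archived and the transport modules are unloaded. A rough sketch of that cleanup, with the output directory abbreviated:
    find /dev/shm -name '*.0' -printf '%f\n'                  # locates nvmf_trace.0
    tar -C /dev/shm/ -cvzf <output-dir>/nvmf_trace.0_shm.tar.gz nvmf_trace.0
    modprobe -v -r nvme-tcp                                    # rmmod output shows nvme_tcp, nvme_fabrics, nvme_keyring going away
    modprobe -v -r nvme-fabrics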
00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3076045 ']' 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3076045 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3076045 ']' 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3076045 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076045 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076045' 00:07:44.269 killing process with pid 3076045 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3076045 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3076045 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.269 00:15:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.820 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:46.820 00:07:46.820 real 0m44.869s 00:07:46.820 user 1m7.976s 00:07:46.820 sys 0m10.559s 00:07:46.821 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.821 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.821 ************************************ 00:07:46.821 END TEST nvmf_lvs_grow 00:07:46.821 ************************************ 00:07:46.821 00:15:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:46.821 00:15:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:46.821 00:15:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.821 00:15:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.821 ************************************ 00:07:46.821 START TEST nvmf_bdev_io_wait 00:07:46.821 ************************************ 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:46.821 * Looking for test storage... 00:07:46.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.821 --rc genhtml_branch_coverage=1 00:07:46.821 --rc genhtml_function_coverage=1 00:07:46.821 --rc genhtml_legend=1 00:07:46.821 --rc geninfo_all_blocks=1 00:07:46.821 --rc geninfo_unexecuted_blocks=1 00:07:46.821 00:07:46.821 ' 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.821 --rc genhtml_branch_coverage=1 00:07:46.821 --rc genhtml_function_coverage=1 00:07:46.821 --rc genhtml_legend=1 00:07:46.821 --rc geninfo_all_blocks=1 00:07:46.821 --rc geninfo_unexecuted_blocks=1 00:07:46.821 00:07:46.821 ' 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.821 --rc genhtml_branch_coverage=1 00:07:46.821 --rc genhtml_function_coverage=1 00:07:46.821 --rc genhtml_legend=1 00:07:46.821 --rc geninfo_all_blocks=1 00:07:46.821 --rc geninfo_unexecuted_blocks=1 00:07:46.821 00:07:46.821 ' 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.821 --rc genhtml_branch_coverage=1 00:07:46.821 --rc genhtml_function_coverage=1 00:07:46.821 --rc genhtml_legend=1 00:07:46.821 --rc geninfo_all_blocks=1 00:07:46.821 --rc geninfo_unexecuted_blocks=1 00:07:46.821 00:07:46.821 ' 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.821 00:15:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.821 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.822 00:15:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.988 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:54.989 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:54.989 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.989 00:15:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.989 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:54.989 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:54.990 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:54.990 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:54.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:07:54.991 00:07:54.991 --- 10.0.0.2 ping statistics --- 00:07:54.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.991 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:54.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:07:54.991 00:07:54.991 --- 10.0.0.1 ping statistics --- 00:07:54.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.991 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:54.991 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:54.994 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.994 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3081139 00:07:54.994 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3081139 00:07:54.994 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:54.994 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3081139 ']' 00:07:54.994 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.994 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.994 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.995 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.995 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.995 [2024-10-09 00:15:24.829644] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:07:54.995 [2024-10-09 00:15:24.829707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.995 [2024-10-09 00:15:24.919454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.995 [2024-10-09 00:15:25.016027] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.995 [2024-10-09 00:15:25.016087] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.995 [2024-10-09 00:15:25.016096] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.995 [2024-10-09 00:15:25.016103] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.995 [2024-10-09 00:15:25.016110] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.995 [2024-10-09 00:15:25.018549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.995 [2024-10-09 00:15:25.018709] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.995 [2024-10-09 00:15:25.018868] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.995 [2024-10-09 00:15:25.018995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:55.258 [2024-10-09 00:15:25.777604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.258 Malloc0 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.258 [2024-10-09 00:15:25.852508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3081195 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3081197 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:55.258 { 00:07:55.258 "params": { 
00:07:55.258 "name": "Nvme$subsystem", 00:07:55.258 "trtype": "$TEST_TRANSPORT", 00:07:55.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.258 "adrfam": "ipv4", 00:07:55.258 "trsvcid": "$NVMF_PORT", 00:07:55.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.258 "hdgst": ${hdgst:-false}, 00:07:55.258 "ddgst": ${ddgst:-false} 00:07:55.258 }, 00:07:55.258 "method": "bdev_nvme_attach_controller" 00:07:55.258 } 00:07:55.258 EOF 00:07:55.258 )") 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3081200 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:55.258 { 00:07:55.258 "params": { 00:07:55.258 "name": "Nvme$subsystem", 00:07:55.258 "trtype": "$TEST_TRANSPORT", 00:07:55.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.258 "adrfam": "ipv4", 00:07:55.258 "trsvcid": "$NVMF_PORT", 00:07:55.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.258 "hdgst": ${hdgst:-false}, 00:07:55.258 "ddgst": ${ddgst:-false} 00:07:55.258 }, 00:07:55.258 "method": "bdev_nvme_attach_controller" 00:07:55.258 } 00:07:55.258 EOF 00:07:55.258 )") 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3081203 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:55.258 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:55.259 { 00:07:55.259 "params": { 00:07:55.259 "name": "Nvme$subsystem", 00:07:55.259 "trtype": "$TEST_TRANSPORT", 00:07:55.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.259 "adrfam": "ipv4", 00:07:55.259 "trsvcid": "$NVMF_PORT", 00:07:55.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.259 "hdgst": ${hdgst:-false}, 
00:07:55.259 "ddgst": ${ddgst:-false} 00:07:55.259 }, 00:07:55.259 "method": "bdev_nvme_attach_controller" 00:07:55.259 } 00:07:55.259 EOF 00:07:55.259 )") 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:55.259 { 00:07:55.259 "params": { 00:07:55.259 "name": "Nvme$subsystem", 00:07:55.259 "trtype": "$TEST_TRANSPORT", 00:07:55.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.259 "adrfam": "ipv4", 00:07:55.259 "trsvcid": "$NVMF_PORT", 00:07:55.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.259 "hdgst": ${hdgst:-false}, 00:07:55.259 "ddgst": ${ddgst:-false} 00:07:55.259 }, 00:07:55.259 "method": "bdev_nvme_attach_controller" 00:07:55.259 } 00:07:55.259 EOF 00:07:55.259 )") 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3081195 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:55.259 "params": { 00:07:55.259 "name": "Nvme1", 00:07:55.259 "trtype": "tcp", 00:07:55.259 "traddr": "10.0.0.2", 00:07:55.259 "adrfam": "ipv4", 00:07:55.259 "trsvcid": "4420", 00:07:55.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.259 "hdgst": false, 00:07:55.259 "ddgst": false 00:07:55.259 }, 00:07:55.259 "method": "bdev_nvme_attach_controller" 00:07:55.259 }' 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:55.259 "params": { 00:07:55.259 "name": "Nvme1", 00:07:55.259 "trtype": "tcp", 00:07:55.259 "traddr": "10.0.0.2", 00:07:55.259 "adrfam": "ipv4", 00:07:55.259 "trsvcid": "4420", 00:07:55.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.259 "hdgst": false, 00:07:55.259 "ddgst": false 00:07:55.259 }, 00:07:55.259 "method": "bdev_nvme_attach_controller" 00:07:55.259 }' 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:55.259 "params": { 00:07:55.259 "name": "Nvme1", 00:07:55.259 "trtype": "tcp", 00:07:55.259 "traddr": "10.0.0.2", 00:07:55.259 "adrfam": "ipv4", 00:07:55.259 "trsvcid": "4420", 00:07:55.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.259 "hdgst": false, 00:07:55.259 "ddgst": false 00:07:55.259 }, 00:07:55.259 "method": "bdev_nvme_attach_controller" 00:07:55.259 }' 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:55.259 00:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:55.259 "params": { 00:07:55.259 "name": "Nvme1", 00:07:55.259 "trtype": "tcp", 00:07:55.259 "traddr": "10.0.0.2", 00:07:55.259 "adrfam": "ipv4", 00:07:55.259 "trsvcid": "4420", 00:07:55.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.259 "hdgst": false, 00:07:55.259 "ddgst": false 00:07:55.259 }, 00:07:55.259 "method": "bdev_nvme_attach_controller" 00:07:55.259 }' 00:07:55.519 [2024-10-09 00:15:25.910807] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:55.519 [2024-10-09 00:15:25.910889] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:55.519 [2024-10-09 00:15:25.911293] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:55.519 [2024-10-09 00:15:25.911353] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:55.519 [2024-10-09 00:15:25.913666] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:55.519 [2024-10-09 00:15:25.913750] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:55.519 [2024-10-09 00:15:25.913836] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:07:55.519 [2024-10-09 00:15:25.913899] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:55.519 [2024-10-09 00:15:26.122159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.780 [2024-10-09 00:15:26.192018] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:07:55.780 [2024-10-09 00:15:26.216236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.780 [2024-10-09 00:15:26.288571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:07:55.780 [2024-10-09 00:15:26.310570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.780 [2024-10-09 00:15:26.378664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.780 [2024-10-09 00:15:26.380778] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:56.041 [2024-10-09 00:15:26.447462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:07:56.041 Running I/O for 1 seconds... 00:07:56.319 Running I/O for 1 seconds... 00:07:56.319 Running I/O for 1 seconds... 00:07:56.319 Running I/O for 1 seconds... 00:07:57.263 188312.00 IOPS, 735.59 MiB/s 00:07:57.263 Latency(us) 00:07:57.263 [2024-10-08T22:15:27.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.263 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:57.263 Nvme1n1 : 1.00 187935.48 734.12 0.00 0.00 677.47 319.15 1993.39 00:07:57.263 [2024-10-08T22:15:27.898Z] =================================================================================================================== 00:07:57.263 [2024-10-08T22:15:27.898Z] Total : 187935.48 734.12 0.00 0.00 677.47 319.15 1993.39 00:07:57.263 7486.00 IOPS, 29.24 MiB/s 00:07:57.263 Latency(us) 00:07:57.263 [2024-10-08T22:15:27.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.263 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:57.263 Nvme1n1 : 1.02 7444.14 29.08 0.00 0.00 16975.52 5734.40 30365.01 00:07:57.263 [2024-10-08T22:15:27.898Z] =================================================================================================================== 00:07:57.263 [2024-10-08T22:15:27.898Z] Total : 7444.14 29.08 0.00 0.00 16975.52 5734.40 30365.01 00:07:57.263 11238.00 IOPS, 43.90 MiB/s 00:07:57.263 Latency(us) 00:07:57.263 [2024-10-08T22:15:27.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.263 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:57.263 Nvme1n1 : 1.01 11275.97 44.05 0.00 0.00 11304.23 6362.45 19879.25 00:07:57.263 [2024-10-08T22:15:27.898Z] =================================================================================================================== 00:07:57.263 [2024-10-08T22:15:27.898Z] Total : 11275.97 44.05 0.00 0.00 11304.23 6362.45 19879.25 00:07:57.263 7197.00 IOPS, 28.11 MiB/s 00:07:57.263 Latency(us) 00:07:57.263 [2024-10-08T22:15:27.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.263 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:57.263 Nvme1n1 : 1.01 7312.07 28.56 0.00 0.00 17453.87 4341.76 38884.69 00:07:57.263 [2024-10-08T22:15:27.898Z] 
=================================================================================================================== 00:07:57.263 [2024-10-08T22:15:27.898Z] Total : 7312.07 28.56 0.00 0.00 17453.87 4341.76 38884.69 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3081197 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3081200 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3081203 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.524 rmmod nvme_tcp 00:07:57.524 rmmod nvme_fabrics 00:07:57.524 rmmod nvme_keyring 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3081139 ']' 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3081139 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3081139 ']' 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3081139 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3081139 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 3081139' 00:07:57.524 killing process with pid 3081139 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3081139 00:07:57.524 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3081139 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.784 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.329 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.329 00:08:00.329 real 0m13.330s 00:08:00.329 user 0m20.720s 00:08:00.329 sys 0m7.634s 00:08:00.329 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.329 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.329 ************************************ 00:08:00.329 END TEST nvmf_bdev_io_wait 00:08:00.329 ************************************ 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.330 ************************************ 00:08:00.330 START TEST nvmf_queue_depth 00:08:00.330 ************************************ 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:00.330 * Looking for test storage... 
00:08:00.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:00.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.330 --rc genhtml_branch_coverage=1 00:08:00.330 --rc genhtml_function_coverage=1 00:08:00.330 --rc genhtml_legend=1 00:08:00.330 --rc geninfo_all_blocks=1 00:08:00.330 --rc geninfo_unexecuted_blocks=1 00:08:00.330 00:08:00.330 ' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:00.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.330 --rc genhtml_branch_coverage=1 00:08:00.330 --rc genhtml_function_coverage=1 00:08:00.330 --rc genhtml_legend=1 00:08:00.330 --rc geninfo_all_blocks=1 00:08:00.330 --rc geninfo_unexecuted_blocks=1 00:08:00.330 00:08:00.330 ' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:00.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.330 --rc genhtml_branch_coverage=1 00:08:00.330 --rc genhtml_function_coverage=1 00:08:00.330 --rc genhtml_legend=1 00:08:00.330 --rc geninfo_all_blocks=1 00:08:00.330 --rc geninfo_unexecuted_blocks=1 00:08:00.330 00:08:00.330 ' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:00.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.330 --rc genhtml_branch_coverage=1 00:08:00.330 --rc genhtml_function_coverage=1 00:08:00.330 --rc genhtml_legend=1 00:08:00.330 --rc geninfo_all_blocks=1 00:08:00.330 --rc geninfo_unexecuted_blocks=1 00:08:00.330 00:08:00.330 ' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
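nvmf/common.sh is establishing the test-wide defaults at this point: NVMe-oF ports 4420/4421/4422, the 192.168.100 address prefix, and a per-run host identity generated with nvme gen-hostnqn. Only the resulting values are traced; a sketch of that identity step as it is presumably derived (the exact expansion used for NVME_HOSTID is an assumption, only its value appears above):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep only the trailing uuid, which matches the traced value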
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.330 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
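The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33: build_nvmf_app_args evaluates '[' '' -eq 1 ']' because an empty (unset) variable is handed to an integer comparison, so the test errors out instead of simply being false; the branch is skipped either way, making this noise rather than a failure. A defensive form of that kind of check (the variable and app flag below are hypothetical, the real names at line 33 are not visible in this trace):

  # default the flag to 0 before comparing, so an unset value can never trip [ -eq ]
  if [ "${SOME_FEATURE_FLAG:-0}" -eq 1 ]; then
      NVMF_APP+=(--some-feature)   # hypothetical flag, for illustration only
  fi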
MALLOC_BLOCK_SIZE=512 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:00.331 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:08.672 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:08.672 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:08.672 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:08.672 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
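The device-discovery loop above matches the two E810 functions (vendor 0x8086, device 0x159b, driver ice), then resolves each PCI address to its kernel netdev through sysfs, which is where cvl_0_0 and cvl_0_1 come from before is_hw=yes is set. The sysfs lookup it performs can be reproduced directly (PCI addresses taken from this log):

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"   # prints the netdev name(s) bound to that PCI function
  done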
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.672 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.673 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:08:08.673 00:08:08.673 --- 10.0.0.2 ping statistics --- 00:08:08.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.673 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
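nvmf_tcp_init splits the two ports into a point-to-point pair that cannot be short-circuited through loopback: cvl_0_0 is moved into a new network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP port 4420, and a ping in each direction proves reachability. A condensed replay of the commands traced (interface, namespace, and address names as in this log; run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP (port 4420) on cvl_0_1
  ping -c 1 10.0.0.2                                               # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> root namespace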
00:08:08.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:08:08.673 00:08:08.673 --- 10.0.0.1 ping statistics --- 00:08:08.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.673 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3085907 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3085907 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3085907 ']' 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.673 00:15:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.673 [2024-10-09 00:15:38.245580] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
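With both directions pinging, nvmfappstart launches the SPDK target inside the namespace and waits for its RPC socket. The flags as traced: -i 0 is the shared-memory id (NVMF_APP_SHM_ID), -e 0xFFFF enables every tracepoint group (hence the "Tracepoint Group Mask 0xFFFF" notice), and -m 0x2 restricts the app to core 1, which is why a single reactor starts on core 1. Done by hand it is roughly:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # waitforlisten then polls /var/tmp/spdk.sock until the RPC server answers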
00:08:08.673 [2024-10-09 00:15:38.245647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.673 [2024-10-09 00:15:38.326399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.673 [2024-10-09 00:15:38.419499] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.673 [2024-10-09 00:15:38.419567] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.673 [2024-10-09 00:15:38.419576] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.673 [2024-10-09 00:15:38.419583] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.673 [2024-10-09 00:15:38.419595] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.673 [2024-10-09 00:15:38.420422] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.673 [2024-10-09 00:15:39.119075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.673 Malloc0 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.673 00:15:39 
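Once the target answers on /var/tmp/spdk.sock, queue_depth.sh provisions it over RPC: a TCP transport, a 64 MiB RAM-backed Malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from the script), and subsystem nqn.2016-06.io.spdk:cnode1 with any host allowed (-a) and serial SPDK00000000000001 (-s). The harness' rpc_cmd is effectively scripts/rpc.py pointed at that socket, so the same setup by hand looks like:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001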
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.673 [2024-10-09 00:15:39.191649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3086232 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3086232 /var/tmp/bdevperf.sock 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3086232 ']' 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:08.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.673 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.673 [2024-10-09 00:15:39.250499] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
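The Malloc bdev is then exposed as a namespace of cnode1 and a TCP listener is opened on 10.0.0.2:4420, after which the initiator-side load generator is started in the root namespace with its own RPC socket. The bdevperf flags carry the point of the test: -z starts it idle so bdevs and the run are driven over RPC, -r selects the private socket, -q 1024 keeps 1024 I/Os outstanding (the queue depth being exercised), -o 4096 uses 4 KiB I/Os, -w verify writes and reads back with verification, and -t 10 runs for ten seconds. In outline:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &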
00:08:08.673 [2024-10-09 00:15:39.250565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086232 ] 00:08:08.934 [2024-10-09 00:15:39.331738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.934 [2024-10-09 00:15:39.427116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.504 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.504 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:09.504 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:09.504 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.504 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.764 NVMe0n1 00:08:09.764 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.764 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.024 Running I/O for 10 seconds... 00:08:11.902 10029.00 IOPS, 39.18 MiB/s [2024-10-08T22:15:43.487Z] 10745.50 IOPS, 41.97 MiB/s [2024-10-08T22:15:44.870Z] 11022.33 IOPS, 43.06 MiB/s [2024-10-08T22:15:45.810Z] 11344.50 IOPS, 44.31 MiB/s [2024-10-08T22:15:46.748Z] 11678.00 IOPS, 45.62 MiB/s [2024-10-08T22:15:47.688Z] 11948.50 IOPS, 46.67 MiB/s [2024-10-08T22:15:48.627Z] 12141.29 IOPS, 47.43 MiB/s [2024-10-08T22:15:49.566Z] 12291.88 IOPS, 48.02 MiB/s [2024-10-08T22:15:50.505Z] 12472.89 IOPS, 48.72 MiB/s [2024-10-08T22:15:50.765Z] 12591.20 IOPS, 49.18 MiB/s 00:08:20.130 Latency(us) 00:08:20.130 [2024-10-08T22:15:50.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.130 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:20.130 Verification LBA range: start 0x0 length 0x4000 00:08:20.130 NVMe0n1 : 10.06 12621.12 49.30 0.00 0.00 80877.81 24576.00 71652.69 00:08:20.130 [2024-10-08T22:15:50.765Z] =================================================================================================================== 00:08:20.130 [2024-10-08T22:15:50.766Z] Total : 12621.12 49.30 0.00 0.00 80877.81 24576.00 71652.69 00:08:20.131 { 00:08:20.131 "results": [ 00:08:20.131 { 00:08:20.131 "job": "NVMe0n1", 00:08:20.131 "core_mask": "0x1", 00:08:20.131 "workload": "verify", 00:08:20.131 "status": "finished", 00:08:20.131 "verify_range": { 00:08:20.131 "start": 0, 00:08:20.131 "length": 16384 00:08:20.131 }, 00:08:20.131 "queue_depth": 1024, 00:08:20.131 "io_size": 4096, 00:08:20.131 "runtime": 10.056875, 00:08:20.131 "iops": 12621.117394816978, 00:08:20.131 "mibps": 49.30123982350382, 00:08:20.131 "io_failed": 0, 00:08:20.131 "io_timeout": 0, 00:08:20.131 "avg_latency_us": 80877.8116547046, 00:08:20.131 "min_latency_us": 24576.0, 00:08:20.131 "max_latency_us": 71652.69333333333 00:08:20.131 } 00:08:20.131 ], 00:08:20.131 "core_count": 1 00:08:20.131 } 00:08:20.131 00:15:50 
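A quick consistency check on the result table: 12621.12 IOPS at 4096-byte I/Os is 12621.12 × 4096 / 2^20 ≈ 49.3 MiB/s, matching the MiB/s column, and by Little's law the average number of I/Os in flight is IOPS × average latency = 12621.12 × 0.0808778 s ≈ 1021, so the requested -q 1024 queue depth was kept essentially saturated for the whole run; the small shortfall is the ramp-up visible in the per-second samples climbing from about 10.0 k to 12.6 k IOPS.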
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3086232 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3086232 ']' 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3086232 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3086232 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3086232' 00:08:20.131 killing process with pid 3086232 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3086232 00:08:20.131 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.131 00:08:20.131 Latency(us) 00:08:20.131 [2024-10-08T22:15:50.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.131 [2024-10-08T22:15:50.766Z] =================================================================================================================== 00:08:20.131 [2024-10-08T22:15:50.766Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3086232 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.131 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.131 rmmod nvme_tcp 00:08:20.131 rmmod nvme_fabrics 00:08:20.391 rmmod nvme_keyring 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3085907 ']' 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3085907 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3085907 ']' 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
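Teardown mirrors the setup in reverse: the bdevperf process (pid 3086232) is killed first, nvmftestfini unloads the host-side NVMe/TCP modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), and only then is the target (pid 3085907) stopped before the namespace and iptables state are flushed further down. In outline:

  kill "$bdevperf_pid" && wait "$bdevperf_pid"   # stop the initiator workload first
  modprobe -v -r nvme-tcp                        # verbose removal cascades: nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                # stop nvmf_tgt inside the namespace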
common/autotest_common.sh@954 -- # kill -0 3085907 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3085907 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3085907' 00:08:20.391 killing process with pid 3085907 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3085907 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3085907 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:20.391 00:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:20.391 00:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.391 00:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.391 00:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.391 00:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.391 00:15:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:22.932 00:08:22.932 real 0m22.634s 00:08:22.932 user 0m26.005s 00:08:22.932 sys 0m7.070s 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.932 ************************************ 00:08:22.932 END TEST nvmf_queue_depth 00:08:22.932 ************************************ 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.932 ************************************ 00:08:22.932 START TEST nvmf_target_multipath 00:08:22.932 ************************************ 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:22.932 * Looking for test storage... 00:08:22.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.932 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:22.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.933 --rc genhtml_branch_coverage=1 00:08:22.933 --rc genhtml_function_coverage=1 00:08:22.933 --rc genhtml_legend=1 00:08:22.933 --rc geninfo_all_blocks=1 00:08:22.933 --rc geninfo_unexecuted_blocks=1 00:08:22.933 00:08:22.933 ' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:22.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.933 --rc genhtml_branch_coverage=1 00:08:22.933 --rc genhtml_function_coverage=1 00:08:22.933 --rc genhtml_legend=1 00:08:22.933 --rc geninfo_all_blocks=1 00:08:22.933 --rc geninfo_unexecuted_blocks=1 00:08:22.933 00:08:22.933 ' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:22.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.933 --rc genhtml_branch_coverage=1 00:08:22.933 --rc genhtml_function_coverage=1 00:08:22.933 --rc genhtml_legend=1 00:08:22.933 --rc geninfo_all_blocks=1 00:08:22.933 --rc geninfo_unexecuted_blocks=1 00:08:22.933 00:08:22.933 ' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:22.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.933 --rc genhtml_branch_coverage=1 00:08:22.933 --rc genhtml_function_coverage=1 00:08:22.933 --rc genhtml_legend=1 00:08:22.933 --rc geninfo_all_blocks=1 00:08:22.933 --rc geninfo_unexecuted_blocks=1 00:08:22.933 00:08:22.933 ' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:22.933 00:15:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.071 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:31.072 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:31.072 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:31.072 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.072 00:16:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:31.072 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:08:31.072 00:08:31.072 --- 10.0.0.2 ping statistics --- 00:08:31.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.072 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:08:31.072 00:08:31.072 --- 10.0.0.1 ping statistics --- 00:08:31.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.072 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:31.072 only one NIC for nvmf test 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
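The nvmf_tcp_init trace above boils down to a small two-port topology: one e810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, the other (cvl_0_1) stays in the root namespace as the initiator, and a tagged iptables rule opens TCP/4420 between them. A simplified sketch of that setup, using only the names and addresses visible in this log rather than the literal nvmf/common.sh code:

  # sketch of the topology built by nvmf_tcp_init (simplified reconstruction)
  TARGET_NS=cvl_0_0_ns_spdk
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator IP, root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the netns
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  # open NVMe/TCP port 4420; the comment tag lets teardown strip exactly this rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                              # initiator -> target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                   # target -> initiator

The nvmftestfini/nvmf_tcp_fini sequence that follows simply reverses this: iptables-save | grep -v SPDK_NVMF | iptables-restore drops the tagged rule, the namespace is removed, and the leftover 10.0.0.x addresses are flushed.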
00:08:31.072 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:31.073 rmmod nvme_tcp 00:08:31.073 rmmod nvme_fabrics 00:08:31.073 rmmod nvme_keyring 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.073 00:16:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:32.457 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:32.717 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.717 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:32.717 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.717 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.717 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:32.718 00:08:32.718 real 0m9.945s 00:08:32.718 user 0m2.211s 00:08:32.718 sys 0m5.660s 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:32.718 ************************************ 00:08:32.718 END TEST nvmf_target_multipath 00:08:32.718 ************************************ 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.718 ************************************ 00:08:32.718 START TEST nvmf_zcopy 00:08:32.718 ************************************ 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:32.718 * Looking for test storage... 
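The END TEST banner with its real/user/sys times, followed immediately by the START TEST banner for nvmf_zcopy, comes from the run_test wrapper that drives each sub-test script. A rough sketch of that pattern; this is an assumed shape inferred from the banners and timing output, not the actual autotest_common.sh implementation:

  # assumed shape of run_test, inferred from the banners and timing above
  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"        # e.g. test/nvmf/target/zcopy.sh --transport=tcp
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }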
00:08:32.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:08:32.718 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.979 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:32.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.980 --rc genhtml_branch_coverage=1 00:08:32.980 --rc genhtml_function_coverage=1 00:08:32.980 --rc genhtml_legend=1 00:08:32.980 --rc geninfo_all_blocks=1 00:08:32.980 --rc geninfo_unexecuted_blocks=1 00:08:32.980 00:08:32.980 ' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:32.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.980 --rc genhtml_branch_coverage=1 00:08:32.980 --rc genhtml_function_coverage=1 00:08:32.980 --rc genhtml_legend=1 00:08:32.980 --rc geninfo_all_blocks=1 00:08:32.980 --rc geninfo_unexecuted_blocks=1 00:08:32.980 00:08:32.980 ' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:32.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.980 --rc genhtml_branch_coverage=1 00:08:32.980 --rc genhtml_function_coverage=1 00:08:32.980 --rc genhtml_legend=1 00:08:32.980 --rc geninfo_all_blocks=1 00:08:32.980 --rc geninfo_unexecuted_blocks=1 00:08:32.980 00:08:32.980 ' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:32.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.980 --rc genhtml_branch_coverage=1 00:08:32.980 --rc genhtml_function_coverage=1 00:08:32.980 --rc genhtml_legend=1 00:08:32.980 --rc geninfo_all_blocks=1 00:08:32.980 --rc geninfo_unexecuted_blocks=1 00:08:32.980 00:08:32.980 ' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:32.980 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:41.149 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:41.149 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:41.149 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:41.150 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:41.150 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:41.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:08:41.150 00:08:41.150 --- 10.0.0.2 ping statistics --- 00:08:41.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.150 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:08:41.150 00:08:41.150 --- 10.0.0.1 ping statistics --- 00:08:41.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.150 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3096929 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3096929 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3096929 ']' 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.150 00:16:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.150 [2024-10-09 00:16:10.987026] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
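nvmfappstart above launches nvmf_tgt inside the target namespace (nvmfpid=3096929) and waitforlisten then blocks until the application accepts RPCs on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait step; using rpc_get_methods as the readiness probe and the 0.5s poll interval are assumptions for illustration, not details taken from this log:

  # sketch of nvmfappstart + waitforlisten (simplified)
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll until the target answers on its UNIX domain RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done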
00:08:41.150 [2024-10-09 00:16:10.987091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.150 [2024-10-09 00:16:11.074133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.150 [2024-10-09 00:16:11.167663] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.150 [2024-10-09 00:16:11.167727] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.150 [2024-10-09 00:16:11.167737] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.150 [2024-10-09 00:16:11.167744] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.150 [2024-10-09 00:16:11.167750] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.150 [2024-10-09 00:16:11.168512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.413 [2024-10-09 00:16:11.852747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.413 [2024-10-09 00:16:11.876998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.413 malloc0 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:41.413 { 00:08:41.413 "params": { 00:08:41.413 "name": "Nvme$subsystem", 00:08:41.413 "trtype": "$TEST_TRANSPORT", 00:08:41.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.413 "adrfam": "ipv4", 00:08:41.413 "trsvcid": "$NVMF_PORT", 00:08:41.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.413 "hdgst": ${hdgst:-false}, 00:08:41.413 "ddgst": ${ddgst:-false} 00:08:41.413 }, 00:08:41.413 "method": "bdev_nvme_attach_controller" 00:08:41.413 } 00:08:41.413 EOF 00:08:41.413 )") 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
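Collected in one place, the rpc_cmd calls above are the whole target-side setup for the zcopy test: a TCP transport created with zero-copy enabled, one subsystem with a listener on 10.0.0.2:4420, and a 32 MB malloc bdev exposed as namespace 1. A sketch of the equivalent rpc.py sequence (the test issues the same RPCs through its rpc_cmd helper):

  # equivalent rpc.py sequence for the target-side setup traced above (sketch)
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

On the host side, the JSON assembled by the gen_nvmf_target_json heredoc is fed to bdevperf through --json /dev/fd/62, attaching controller Nvme1 to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 for the 10-second verify workload.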
00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:41.413 00:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:41.413 "params": { 00:08:41.413 "name": "Nvme1", 00:08:41.413 "trtype": "tcp", 00:08:41.413 "traddr": "10.0.0.2", 00:08:41.413 "adrfam": "ipv4", 00:08:41.413 "trsvcid": "4420", 00:08:41.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.414 "hdgst": false, 00:08:41.414 "ddgst": false 00:08:41.414 }, 00:08:41.414 "method": "bdev_nvme_attach_controller" 00:08:41.414 }' 00:08:41.414 [2024-10-09 00:16:11.994744] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:08:41.414 [2024-10-09 00:16:11.994807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097281 ] 00:08:41.675 [2024-10-09 00:16:12.076704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.675 [2024-10-09 00:16:12.172616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.946 Running I/O for 10 seconds... 00:08:44.281 6466.00 IOPS, 50.52 MiB/s [2024-10-08T22:16:15.861Z] 6518.50 IOPS, 50.93 MiB/s [2024-10-08T22:16:16.806Z] 6540.67 IOPS, 51.10 MiB/s [2024-10-08T22:16:17.760Z] 6550.50 IOPS, 51.18 MiB/s [2024-10-08T22:16:18.707Z] 6586.40 IOPS, 51.46 MiB/s [2024-10-08T22:16:19.650Z] 7096.83 IOPS, 55.44 MiB/s [2024-10-08T22:16:20.595Z] 7480.57 IOPS, 58.44 MiB/s [2024-10-08T22:16:21.538Z] 7760.88 IOPS, 60.63 MiB/s [2024-10-08T22:16:22.925Z] 7982.00 IOPS, 62.36 MiB/s [2024-10-08T22:16:22.925Z] 8161.80 IOPS, 63.76 MiB/s 00:08:52.290 Latency(us) 00:08:52.290 [2024-10-08T22:16:22.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.290 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:52.290 Verification LBA range: start 0x0 length 0x1000 00:08:52.290 Nvme1n1 : 10.01 8166.04 63.80 0.00 0.00 15630.31 1303.89 28180.48 00:08:52.290 [2024-10-08T22:16:22.925Z] =================================================================================================================== 00:08:52.290 [2024-10-08T22:16:22.925Z] Total : 8166.04 63.80 0.00 0.00 15630.31 1303.89 28180.48 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3099297 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:52.290 { 00:08:52.290 "params": { 00:08:52.290 "name": 
"Nvme$subsystem", 00:08:52.290 "trtype": "$TEST_TRANSPORT", 00:08:52.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.290 "adrfam": "ipv4", 00:08:52.290 "trsvcid": "$NVMF_PORT", 00:08:52.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.290 "hdgst": ${hdgst:-false}, 00:08:52.290 "ddgst": ${ddgst:-false} 00:08:52.290 }, 00:08:52.290 "method": "bdev_nvme_attach_controller" 00:08:52.290 } 00:08:52.290 EOF 00:08:52.290 )") 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:52.290 [2024-10-09 00:16:22.659180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.659208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:52.290 00:16:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:52.290 "params": { 00:08:52.290 "name": "Nvme1", 00:08:52.290 "trtype": "tcp", 00:08:52.290 "traddr": "10.0.0.2", 00:08:52.290 "adrfam": "ipv4", 00:08:52.290 "trsvcid": "4420", 00:08:52.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.290 "hdgst": false, 00:08:52.290 "ddgst": false 00:08:52.290 }, 00:08:52.290 "method": "bdev_nvme_attach_controller" 00:08:52.290 }' 00:08:52.290 [2024-10-09 00:16:22.671182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.671191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.683210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.683217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.695242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.695249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.704652] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:08:52.290 [2024-10-09 00:16:22.704740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099297 ] 00:08:52.290 [2024-10-09 00:16:22.707273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.707281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.719302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.719309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.731330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.731337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.743361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.743368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.755392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.755399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.767422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.767429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.779453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.779460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.782017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.290 [2024-10-09 00:16:22.791485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.791493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.803516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.803526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.815548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.815559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.827579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.827587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.835103] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.290 [2024-10-09 00:16:22.839607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.839614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.851643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:52.290 [2024-10-09 00:16:22.851656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.863675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.863687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.875701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.875709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.290 [2024-10-09 00:16:22.887733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.290 [2024-10-09 00:16:22.887741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.291 [2024-10-09 00:16:22.899772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.291 [2024-10-09 00:16:22.899784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.291 [2024-10-09 00:16:22.911800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.291 [2024-10-09 00:16:22.911811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.291 [2024-10-09 00:16:22.923830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.291 [2024-10-09 00:16:22.923839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:22.935862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:22.935870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:22.947893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:22.947899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:22.959925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:22.959932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:22.971960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:22.971970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:22.983992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:22.984001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:22.996021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:22.996028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.008053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.008060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.020086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.020093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 
00:16:23.032120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.032131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.044147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.044154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.056181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.056188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.068212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.068222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.080243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.080251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.092274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.092281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.104307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.104314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.116338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.116345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.161211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.161224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 Running I/O for 5 seconds... 
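From here until the second bdevperf job finishes, the log is dominated by the same two records repeating roughly every 12 ms: spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 because it is already attached, followed by nvmf_rpc_ns_paused reporting that the namespace could not be added. With xtrace disabled (target/zcopy.sh@41) the loop driving this is not traced, but the cadence matches a script that keeps re-issuing the namespace-add RPC while the 5-second randrw workload (perfpid=3099297) is running; each attempt pauses and resumes the subsystem, which is the pause/resume-under-zero-copy-I/O path this test exercises. A hedged sketch of such a loop, for illustration only:

  # Illustrative only; the real loop in target/zcopy.sh is not visible in this trace.
  # Keep re-adding the already-attached namespace while the randrw bdevperf job runs.
  # Every attempt pauses the subsystem, fails with "Requested NSID 1 already in use",
  # and resumes it, so zero-copy I/O keeps flowing across repeated pause/resume cycles.
  while kill -0 "$perfpid" 2> /dev/null; do
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done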
00:08:52.552 [2024-10-09 00:16:23.172490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.172499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.552 [2024-10-09 00:16:23.187235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.552 [2024-10-09 00:16:23.187250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.200268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.200284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.213801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.213816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.227461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.227476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.240694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.240710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.253130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.253145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.266811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.266829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.279064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.279079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.292426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.292441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.305748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.305763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.318842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.318857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.332552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.332567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.345285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.345300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.357824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 
[2024-10-09 00:16:23.357839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.371511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.371526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.384287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.384301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.397561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.397576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.410550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.410565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.423610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.423625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.436072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.436087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.814 [2024-10-09 00:16:23.448969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.814 [2024-10-09 00:16:23.448984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.462612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.462627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.475248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.475263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.487839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.487854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.500453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.500467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.513812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.513830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.526177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.526192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.539898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.539912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.552531] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.552546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.565339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.565354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.578584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.578599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.592132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.592147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.605416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.605431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.075 [2024-10-09 00:16:23.618753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.075 [2024-10-09 00:16:23.618767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.076 [2024-10-09 00:16:23.631725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.076 [2024-10-09 00:16:23.631740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.076 [2024-10-09 00:16:23.644662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.076 [2024-10-09 00:16:23.644676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.076 [2024-10-09 00:16:23.657288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.076 [2024-10-09 00:16:23.657302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.076 [2024-10-09 00:16:23.670526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.076 [2024-10-09 00:16:23.670540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.076 [2024-10-09 00:16:23.684260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.076 [2024-10-09 00:16:23.684275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.076 [2024-10-09 00:16:23.697051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.076 [2024-10-09 00:16:23.697065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.076 [2024-10-09 00:16:23.710092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.076 [2024-10-09 00:16:23.710107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.722865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.722879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.736385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.736403] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.749511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.749527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.762248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.762268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.774832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.774847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.788486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.788501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.801965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.801980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.814333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.814348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.827102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.827117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.840193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.840208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.853394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.853409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.866394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.866409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.879271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.879286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.892935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.892950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.906256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.906270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.919338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.919353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.932830] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.932844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.946215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.946229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.337 [2024-10-09 00:16:23.959600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.337 [2024-10-09 00:16:23.959614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:23.973314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:23.973328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:23.986177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:23.986191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:23.999864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:23.999879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.013237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.013258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.026471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.026485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.039899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.039914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.052974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.052988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.066351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.066365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.079714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.079734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.092570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.092584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.105809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.105826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.118986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.119001] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.132438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.132452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.146075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.146090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.158785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.158799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.172004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.172019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 19061.00 IOPS, 148.91 MiB/s [2024-10-08T22:16:24.234Z] [2024-10-09 00:16:24.185190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.185204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.198640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.198654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.211239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.211254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.599 [2024-10-09 00:16:24.224346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.599 [2024-10-09 00:16:24.224361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.237496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.237510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.250849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.250863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.263887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.263901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.277023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.277037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.290508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.290523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.303840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.303854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 
00:16:24.317013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.317027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.330354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.330368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.343794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.343809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.357192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.357206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.370934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.370948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.383730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.383745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.396254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.396268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.409923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.409937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.423531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.423545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.437045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.437060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.449834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.449848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.462969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.462984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.475629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.475643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.862 [2024-10-09 00:16:24.489346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.862 [2024-10-09 00:16:24.489360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.502149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.502163] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.515570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.515585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.529129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.529144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.542557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.542571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.556091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.556105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.569310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.569325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.581975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.581989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.595699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.595713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.608635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.608650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.622085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.622099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.635163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.635177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.647596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.647614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.661176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.661190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.674492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.674507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.687539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.687553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.700104] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.700118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.713019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.713034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.726563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.726578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.740121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.740136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.124 [2024-10-09 00:16:24.753546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.124 [2024-10-09 00:16:24.753560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.385 [2024-10-09 00:16:24.767193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.385 [2024-10-09 00:16:24.767207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.385 [2024-10-09 00:16:24.780277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.385 [2024-10-09 00:16:24.780291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.385 [2024-10-09 00:16:24.793791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.385 [2024-10-09 00:16:24.793805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.385 [2024-10-09 00:16:24.807296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.385 [2024-10-09 00:16:24.807310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.385 [2024-10-09 00:16:24.820064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.385 [2024-10-09 00:16:24.820079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.385 [2024-10-09 00:16:24.833591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.385 [2024-10-09 00:16:24.833605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.846474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.846487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.860222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.860236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.873476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.873489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.886532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.886546] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.899471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.899485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.913102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.913116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.925977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.925991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.938667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.938681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.951103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.951117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.963646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.963661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.976380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.976398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:24.988750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:24.988766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:25.002121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:25.002140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.386 [2024-10-09 00:16:25.015700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.386 [2024-10-09 00:16:25.015715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.028181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.028197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.041655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.041671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.054830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.054844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.067434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.067449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.080490] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.080505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.093991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.094006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.106860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.106876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.119665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.119680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.133252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.133267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.145574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.145588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.158430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.158445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.171804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.171819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 19241.00 IOPS, 150.32 MiB/s [2024-10-08T22:16:25.282Z] [2024-10-09 00:16:25.184515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.184530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.647 [2024-10-09 00:16:25.198096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.647 [2024-10-09 00:16:25.198110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.648 [2024-10-09 00:16:25.211398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.648 [2024-10-09 00:16:25.211413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.648 [2024-10-09 00:16:25.224613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.648 [2024-10-09 00:16:25.224627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.648 [2024-10-09 00:16:25.237711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.648 [2024-10-09 00:16:25.237732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.648 [2024-10-09 00:16:25.251290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.648 [2024-10-09 00:16:25.251309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.648 [2024-10-09 00:16:25.264544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:54.648 [2024-10-09 00:16:25.264559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:54.648 [... the same two-line error pair -- subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace -- repeats roughly every 13 ms from 00:16:25.277 through 00:16:28.191; only the periodic throughput samples emitted in between are kept below ...]
00:08:55.720 19278.00 IOPS, 150.61 MiB/s [2024-10-08T22:16:26.355Z]
00:08:56.770 19284.00 IOPS, 150.66 MiB/s [2024-10-08T22:16:27.405Z]
00:08:57.820 19283.20 IOPS, 150.65 MiB/s [2024-10-08T22:16:28.455Z]
00:08:57.820
00:08:57.820 Latency(us)
00:08:57.820 [2024-10-08T22:16:28.455Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:57.820 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:57.820 Nvme1n1                     :       5.01   19287.21     150.68       0.00     0.00    6630.71    2252.80   17913.17
00:08:57.820 [2024-10-08T22:16:28.455Z] ===================================================================================================================
00:08:57.820 [2024-10-08T22:16:28.455Z] Total                       :              19287.21     150.68       0.00     0.00    6630.71    2252.80   17913.17
00:08:57.820 [... the add-namespace error pair continues from 00:16:28.201 through 00:16:28.261 ...]
00:08:57.820 [2024-10-09
00:16:28.273995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.820 [2024-10-09 00:16:28.274006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.820 [2024-10-09 00:16:28.286023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.820 [2024-10-09 00:16:28.286032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.820 [2024-10-09 00:16:28.298055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.820 [2024-10-09 00:16:28.298064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.820 [2024-10-09 00:16:28.310085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.820 [2024-10-09 00:16:28.310093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3099297) - No such process 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3099297 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.820 delay0 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.820 00:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:58.082 [2024-10-09 00:16:28.469080] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:04.772 [2024-10-09 00:16:35.257653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9598c0 is same with the state(6) to be set 00:09:04.772 Initializing NVMe Controllers 00:09:04.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:04.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:04.772 Initialization complete. 
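Note on the steps traced above (the worker launch and abort statistics follow immediately below): the long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors earlier in the run is, judging from the traces, produced by the background loop (pid 3099297, which zcopy.sh reports as already gone at its line 42) that keeps re-issuing nvmf_subsystem_add_ns for an NSID that already exists while I/O is in flight. After that, the script detaches namespace 1, wraps malloc0 in a deliberately slow delay bdev, re-attaches it as namespace 1, and drives it with the abort example over TCP. The same sequence could be reproduced by hand roughly as follows; this is a sketch only, assuming SPDK's scripts/rpc.py (the rpc_cmd helper in the trace wraps it) and a target already listening on 10.0.0.2:4420, with the names malloc0, delay0 and nqn.2016-06.io.spdk:cnode1 taken from the trace:

    # detach the existing namespace 1 from the subsystem
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # wrap malloc0 in a delay bdev; the four values set average and tail (p99)
    # read/write latency in microseconds, i.e. roughly one second per I/O here
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the slow bdev as namespace 1 again
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # run a 5-second 50/50 random read/write load at queue depth 64 and abort
    # outstanding I/O, using the same command line as the trace
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'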
Launching workers. 00:09:04.772 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 295, failed: 14355 00:09:04.772 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14587, failed to submit 63 00:09:04.772 success 14432, unsuccessful 155, failed 0 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.772 rmmod nvme_tcp 00:09:04.772 rmmod nvme_fabrics 00:09:04.772 rmmod nvme_keyring 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:04.772 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3096929 ']' 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3096929 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3096929 ']' 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3096929 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3096929 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3096929' 00:09:04.773 killing process with pid 3096929 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3096929 00:09:04.773 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3096929 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.034 00:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.598 00:09:07.598 real 0m34.419s 00:09:07.598 user 0m45.494s 00:09:07.598 sys 0m11.636s 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.598 ************************************ 00:09:07.598 END TEST nvmf_zcopy 00:09:07.598 ************************************ 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.598 ************************************ 00:09:07.598 START TEST nvmf_nmic 00:09:07.598 ************************************ 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.598 * Looking for test storage... 
00:09:07.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:07.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.598 --rc genhtml_branch_coverage=1 00:09:07.598 --rc genhtml_function_coverage=1 00:09:07.598 --rc genhtml_legend=1 00:09:07.598 --rc geninfo_all_blocks=1 00:09:07.598 --rc geninfo_unexecuted_blocks=1 00:09:07.598 00:09:07.598 ' 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:07.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.598 --rc genhtml_branch_coverage=1 00:09:07.598 --rc genhtml_function_coverage=1 00:09:07.598 --rc genhtml_legend=1 00:09:07.598 --rc geninfo_all_blocks=1 00:09:07.598 --rc geninfo_unexecuted_blocks=1 00:09:07.598 00:09:07.598 ' 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:07.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.598 --rc genhtml_branch_coverage=1 00:09:07.598 --rc genhtml_function_coverage=1 00:09:07.598 --rc genhtml_legend=1 00:09:07.598 --rc geninfo_all_blocks=1 00:09:07.598 --rc geninfo_unexecuted_blocks=1 00:09:07.598 00:09:07.598 ' 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:07.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.598 --rc genhtml_branch_coverage=1 00:09:07.598 --rc genhtml_function_coverage=1 00:09:07.598 --rc genhtml_legend=1 00:09:07.598 --rc geninfo_all_blocks=1 00:09:07.598 --rc geninfo_unexecuted_blocks=1 00:09:07.598 00:09:07.598 ' 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.598 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:07.599 
00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.599 00:16:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.751 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:15.752 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:15.752 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:15.752 00:16:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:15.752 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:15.752 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.752 00:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:15.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:09:15.752 00:09:15.752 --- 10.0.0.2 ping statistics --- 00:09:15.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.752 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:09:15.752 00:09:15.752 --- 10.0.0.1 ping statistics --- 00:09:15.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.752 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:15.752 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3105990 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 3105990 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3105990 ']' 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.753 00:16:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 [2024-10-09 00:16:45.302732] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
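Behind the device discovery and address setup traced above, the harness builds a small two-port topology on the E810 NIC it found (cvl_0_0 and cvl_0_1): the target-facing port is moved into a private network namespace so that initiator and target traffic actually traverse the link rather than loopback. Reassembled from the commands in the log (run as root; device names are specific to this host), the bring-up is roughly:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-facing port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tag is what lets teardown strip only these rules.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                         # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target namespace -> initiator

With connectivity confirmed in both directions, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is what produces the SPDK/DPDK initialization banner that follows.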
00:09:15.753 [2024-10-09 00:16:45.302799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.753 [2024-10-09 00:16:45.394909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.753 [2024-10-09 00:16:45.489939] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.753 [2024-10-09 00:16:45.490002] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.753 [2024-10-09 00:16:45.490011] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.753 [2024-10-09 00:16:45.490018] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.753 [2024-10-09 00:16:45.490025] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.753 [2024-10-09 00:16:45.492525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.753 [2024-10-09 00:16:45.492683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.753 [2024-10-09 00:16:45.492845] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.753 [2024-10-09 00:16:45.492846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 [2024-10-09 00:16:46.176938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 Malloc0 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 [2024-10-09 00:16:46.242651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:15.753 test case1: single bdev can't be used in multiple subsystems 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 [2024-10-09 00:16:46.278513] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:15.753 [2024-10-09 00:16:46.278541] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:15.753 [2024-10-09 00:16:46.278549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.753 request: 00:09:15.753 { 00:09:15.753 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:15.753 "namespace": { 00:09:15.753 "bdev_name": "Malloc0", 00:09:15.753 "no_auto_visible": false 
00:09:15.753 }, 00:09:15.753 "method": "nvmf_subsystem_add_ns", 00:09:15.753 "req_id": 1 00:09:15.753 } 00:09:15.753 Got JSON-RPC error response 00:09:15.753 response: 00:09:15.753 { 00:09:15.753 "code": -32602, 00:09:15.753 "message": "Invalid parameters" 00:09:15.753 } 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:15.753 Adding namespace failed - expected result. 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:15.753 test case2: host connect to nvmf target in multiple paths 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 [2024-10-09 00:16:46.290729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.753 00:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.672 00:16:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:19.059 00:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.059 00:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:19.059 00:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.059 00:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:19.059 00:16:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.997 00:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.998 00:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.998 00:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.998 00:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.998 00:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.998 00:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:20.998 00:16:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:20.998 [global] 00:09:20.998 thread=1 00:09:20.998 invalidate=1 00:09:20.998 rw=write 00:09:20.998 time_based=1 00:09:20.998 runtime=1 00:09:20.998 ioengine=libaio 00:09:20.998 direct=1 00:09:20.998 bs=4096 00:09:20.998 iodepth=1 00:09:20.998 norandommap=0 00:09:20.998 numjobs=1 00:09:20.998 00:09:20.998 verify_dump=1 00:09:20.998 verify_backlog=512 00:09:20.998 verify_state_save=0 00:09:20.998 do_verify=1 00:09:20.998 verify=crc32c-intel 00:09:20.998 [job0] 00:09:20.998 filename=/dev/nvme0n1 00:09:20.998 Could not set queue depth (nvme0n1) 00:09:21.257 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.257 fio-3.35 00:09:21.257 Starting 1 thread 00:09:22.626 00:09:22.626 job0: (groupid=0, jobs=1): err= 0: pid=3107533: Wed Oct 9 00:16:52 2024 00:09:22.626 read: IOPS=477, BW=1910KiB/s (1956kB/s)(1912KiB/1001msec) 00:09:22.626 slat (nsec): min=8104, max=59618, avg=26340.82, stdev=2767.30 00:09:22.626 clat (usec): min=743, max=42036, avg=1320.99, stdev=3739.35 00:09:22.626 lat (usec): min=770, max=42062, avg=1347.33, stdev=3739.29 00:09:22.626 clat percentiles (usec): 00:09:22.626 | 1.00th=[ 799], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 930], 00:09:22.626 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 988], 60.00th=[ 1004], 00:09:22.626 | 70.00th=[ 1012], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:09:22.626 | 99.00th=[ 1172], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:22.626 | 99.99th=[42206] 00:09:22.626 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:22.626 slat (usec): min=9, max=31793, avg=91.22, stdev=1403.85 00:09:22.626 clat (usec): min=330, max=805, avg=589.18, stdev=98.66 00:09:22.626 lat (usec): min=340, max=32431, avg=680.39, stdev=1409.77 00:09:22.626 clat percentiles (usec): 00:09:22.626 | 1.00th=[ 347], 5.00th=[ 396], 10.00th=[ 441], 20.00th=[ 502], 00:09:22.626 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:09:22.626 | 70.00th=[ 660], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 734], 00:09:22.626 | 99.00th=[ 791], 99.50th=[ 799], 99.90th=[ 807], 99.95th=[ 807], 00:09:22.626 | 99.99th=[ 807] 00:09:22.626 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:22.626 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:22.626 lat (usec) : 500=10.00%, 750=40.20%, 1000=29.70% 00:09:22.626 lat (msec) : 2=19.70%, 50=0.40% 00:09:22.626 cpu : usr=2.00%, sys=2.40%, ctx=994, majf=0, minf=1 00:09:22.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.626 issued rwts: total=478,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.626 00:09:22.626 Run status group 0 (all jobs): 00:09:22.626 READ: bw=1910KiB/s (1956kB/s), 1910KiB/s-1910KiB/s (1956kB/s-1956kB/s), io=1912KiB (1958kB), run=1001-1001msec 00:09:22.626 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:22.626 00:09:22.626 Disk stats (read/write): 00:09:22.626 nvme0n1: ios=402/512, merge=0/0, ticks=1497/286, in_queue=1783, util=98.90% 00:09:22.626 00:16:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.626 rmmod nvme_tcp 00:09:22.626 rmmod nvme_fabrics 00:09:22.626 rmmod nvme_keyring 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3105990 ']' 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3105990 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3105990 ']' 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3105990 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3105990 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3105990' 00:09:22.626 killing process with pid 3105990 00:09:22.626 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3105990 00:09:22.626 00:16:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3105990 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.887 00:16:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.803 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.803 00:09:24.803 real 0m17.732s 00:09:24.803 user 0m44.477s 00:09:24.803 sys 0m6.464s 00:09:24.803 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.803 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.803 ************************************ 00:09:24.803 END TEST nvmf_nmic 00:09:24.803 ************************************ 00:09:25.063 00:16:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.064 ************************************ 00:09:25.064 START TEST nvmf_fio_target 00:09:25.064 ************************************ 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.064 * Looking for test storage... 
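Once the shell tracing is stripped away, the nvmf_nmic run that just completed is a short RPC and nvme-cli sequence. The sketch below calls scripts/rpc.py directly at the path the harness uses; the test itself goes through its rpc_cmd wrapper, so treat this as a condensed reading rather than the literal nmic.sh source. NQNs, serials and addresses are the ones from this run.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Case 1: a bdev already claimed by one subsystem cannot back a second one.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail (code -32602 above)

# Case 2: one host reaches the same subsystem through two listeners.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
HOST="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be"
nvme connect $HOST -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect $HOST -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# fio then writes through the resulting /dev/nvme0n1, and a single
# "nvme disconnect -n nqn.2016-06.io.spdk:cnode1" drops both paths at once
# ("disconnected 2 controller(s)" above).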
00:09:25.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:25.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.064 --rc genhtml_branch_coverage=1 00:09:25.064 --rc genhtml_function_coverage=1 00:09:25.064 --rc genhtml_legend=1 00:09:25.064 --rc geninfo_all_blocks=1 00:09:25.064 --rc geninfo_unexecuted_blocks=1 00:09:25.064 00:09:25.064 ' 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:25.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.064 --rc genhtml_branch_coverage=1 00:09:25.064 --rc genhtml_function_coverage=1 00:09:25.064 --rc genhtml_legend=1 00:09:25.064 --rc geninfo_all_blocks=1 00:09:25.064 --rc geninfo_unexecuted_blocks=1 00:09:25.064 00:09:25.064 ' 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:25.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.064 --rc genhtml_branch_coverage=1 00:09:25.064 --rc genhtml_function_coverage=1 00:09:25.064 --rc genhtml_legend=1 00:09:25.064 --rc geninfo_all_blocks=1 00:09:25.064 --rc geninfo_unexecuted_blocks=1 00:09:25.064 00:09:25.064 ' 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:25.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.064 --rc genhtml_branch_coverage=1 00:09:25.064 --rc genhtml_function_coverage=1 00:09:25.064 --rc genhtml_legend=1 00:09:25.064 --rc geninfo_all_blocks=1 00:09:25.064 --rc geninfo_unexecuted_blocks=1 00:09:25.064 00:09:25.064 ' 00:09:25.064 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.326 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.327 00:16:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.327 00:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.470 00:17:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:33.470 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:33.470 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.470 00:17:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:33.470 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:33.470 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.470 00:17:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.470 00:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:09:33.470 00:09:33.470 --- 10.0.0.2 ping statistics --- 00:09:33.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.470 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:09:33.470 00:09:33.470 --- 10.0.0.1 ping statistics --- 00:09:33.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.470 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3111890 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3111890 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3111890 ']' 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.470 00:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.470 [2024-10-09 00:17:03.261919] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:09:33.470 [2024-10-09 00:17:03.261985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.470 [2024-10-09 00:17:03.350677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.470 [2024-10-09 00:17:03.446285] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.470 [2024-10-09 00:17:03.446348] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.470 [2024-10-09 00:17:03.446357] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.470 [2024-10-09 00:17:03.446364] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.470 [2024-10-09 00:17:03.446371] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.470 [2024-10-09 00:17:03.448439] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.471 [2024-10-09 00:17:03.448606] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.471 [2024-10-09 00:17:03.448789] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.471 [2024-10-09 00:17:03.448843] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.471 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.471 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:33.471 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:33.471 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.471 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.729 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.729 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.729 [2024-10-09 00:17:04.288582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.729 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.987 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:33.987 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.246 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:34.246 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.504 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:34.504 00:17:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.762 00:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:34.762 00:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:34.762 00:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.021 00:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:35.021 00:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.280 00:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:35.280 00:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.539 00:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:35.539 00:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:35.539 00:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:35.797 00:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:35.797 00:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.056 00:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:36.056 00:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.318 00:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.318 [2024-10-09 00:17:06.852597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.318 00:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:36.577 00:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:36.835 00:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.209 00:17:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:38.209 00:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:38.209 00:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.209 00:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:38.209 00:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:38.209 00:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.739 00:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.739 00:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.739 00:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.739 00:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:40.739 00:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.739 00:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:40.739 00:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:40.739 [global] 00:09:40.739 thread=1 00:09:40.739 invalidate=1 00:09:40.739 rw=write 00:09:40.739 time_based=1 00:09:40.739 runtime=1 00:09:40.739 ioengine=libaio 00:09:40.739 direct=1 00:09:40.739 bs=4096 00:09:40.739 iodepth=1 00:09:40.739 norandommap=0 00:09:40.739 numjobs=1 00:09:40.739 00:09:40.739 verify_dump=1 00:09:40.739 verify_backlog=512 00:09:40.739 verify_state_save=0 00:09:40.739 do_verify=1 00:09:40.739 verify=crc32c-intel 00:09:40.739 [job0] 00:09:40.739 filename=/dev/nvme0n1 00:09:40.739 [job1] 00:09:40.739 filename=/dev/nvme0n2 00:09:40.739 [job2] 00:09:40.739 filename=/dev/nvme0n3 00:09:40.739 [job3] 00:09:40.739 filename=/dev/nvme0n4 00:09:40.739 Could not set queue depth (nvme0n1) 00:09:40.739 Could not set queue depth (nvme0n2) 00:09:40.739 Could not set queue depth (nvme0n3) 00:09:40.739 Could not set queue depth (nvme0n4) 00:09:40.739 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.739 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.739 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.739 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.739 fio-3.35 00:09:40.739 Starting 4 threads 00:09:42.216 00:09:42.216 job0: (groupid=0, jobs=1): err= 0: pid=3113807: Wed Oct 9 00:17:12 2024 00:09:42.216 read: IOPS=543, BW=2174KiB/s (2226kB/s)(2176KiB/1001msec) 00:09:42.216 slat (nsec): min=6881, max=55135, avg=24628.72, stdev=5242.23 00:09:42.216 clat (usec): min=453, max=1016, avg=791.55, stdev=114.38 00:09:42.216 lat (usec): min=460, max=1041, avg=816.18, stdev=114.93 00:09:42.216 clat percentiles (usec): 00:09:42.216 | 1.00th=[ 537], 5.00th=[ 578], 10.00th=[ 635], 20.00th=[ 676], 
00:09:42.216 | 30.00th=[ 734], 40.00th=[ 775], 50.00th=[ 799], 60.00th=[ 840], 00:09:42.216 | 70.00th=[ 873], 80.00th=[ 906], 90.00th=[ 922], 95.00th=[ 947], 00:09:42.216 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1020], 99.95th=[ 1020], 00:09:42.216 | 99.99th=[ 1020] 00:09:42.216 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:42.216 slat (nsec): min=3495, max=60431, avg=31751.53, stdev=7517.40 00:09:42.216 clat (usec): min=122, max=807, avg=500.16, stdev=113.69 00:09:42.216 lat (usec): min=132, max=855, avg=531.91, stdev=115.40 00:09:42.216 clat percentiles (usec): 00:09:42.216 | 1.00th=[ 249], 5.00th=[ 293], 10.00th=[ 359], 20.00th=[ 396], 00:09:42.216 | 30.00th=[ 445], 40.00th=[ 478], 50.00th=[ 498], 60.00th=[ 529], 00:09:42.216 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 676], 00:09:42.216 | 99.00th=[ 725], 99.50th=[ 742], 99.90th=[ 766], 99.95th=[ 807], 00:09:42.216 | 99.99th=[ 807] 00:09:42.216 bw ( KiB/s): min= 4096, max= 4096, per=33.63%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.216 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.216 lat (usec) : 250=0.89%, 500=32.91%, 750=42.98%, 1000=23.02% 00:09:42.216 lat (msec) : 2=0.19% 00:09:42.216 cpu : usr=2.30%, sys=4.70%, ctx=1569, majf=0, minf=1 00:09:42.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.216 issued rwts: total=544,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.216 job1: (groupid=0, jobs=1): err= 0: pid=3113810: Wed Oct 9 00:17:12 2024 00:09:42.216 read: IOPS=533, BW=2134KiB/s (2185kB/s)(2136KiB/1001msec) 00:09:42.216 slat (nsec): min=6901, max=61096, avg=25264.25, stdev=5024.89 00:09:42.216 clat (usec): min=454, max=1048, avg=777.41, stdev=142.80 00:09:42.216 lat (usec): min=480, max=1074, avg=802.68, stdev=142.55 00:09:42.216 clat percentiles (usec): 00:09:42.216 | 1.00th=[ 502], 5.00th=[ 545], 10.00th=[ 562], 20.00th=[ 611], 00:09:42.216 | 30.00th=[ 693], 40.00th=[ 758], 50.00th=[ 799], 60.00th=[ 832], 00:09:42.216 | 70.00th=[ 881], 80.00th=[ 914], 90.00th=[ 947], 95.00th=[ 971], 00:09:42.216 | 99.00th=[ 1012], 99.50th=[ 1029], 99.90th=[ 1057], 99.95th=[ 1057], 00:09:42.216 | 99.99th=[ 1057] 00:09:42.216 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:42.216 slat (nsec): min=3001, max=68098, avg=26591.51, stdev=11281.25 00:09:42.216 clat (usec): min=156, max=943, avg=520.89, stdev=125.20 00:09:42.216 lat (usec): min=159, max=954, avg=547.49, stdev=126.56 00:09:42.216 clat percentiles (usec): 00:09:42.216 | 1.00th=[ 245], 5.00th=[ 322], 10.00th=[ 363], 20.00th=[ 412], 00:09:42.216 | 30.00th=[ 461], 40.00th=[ 486], 50.00th=[ 515], 60.00th=[ 553], 00:09:42.216 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 685], 95.00th=[ 742], 00:09:42.216 | 99.00th=[ 816], 99.50th=[ 848], 99.90th=[ 881], 99.95th=[ 947], 00:09:42.216 | 99.99th=[ 947] 00:09:42.216 bw ( KiB/s): min= 4096, max= 4096, per=33.63%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.216 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.216 lat (usec) : 250=0.77%, 500=29.08%, 750=46.47%, 1000=23.23% 00:09:42.216 lat (msec) : 2=0.45% 00:09:42.216 cpu : usr=2.10%, sys=4.10%, ctx=1558, majf=0, minf=2 00:09:42.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.216 issued rwts: total=534,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.216 job2: (groupid=0, jobs=1): err= 0: pid=3113811: Wed Oct 9 00:17:12 2024 00:09:42.216 read: IOPS=56, BW=226KiB/s (231kB/s)(228KiB/1009msec) 00:09:42.216 slat (nsec): min=8094, max=43368, avg=26280.56, stdev=4746.75 00:09:42.216 clat (usec): min=326, max=41245, avg=14032.48, stdev=19223.84 00:09:42.216 lat (usec): min=352, max=41271, avg=14058.76, stdev=19223.81 00:09:42.216 clat percentiles (usec): 00:09:42.216 | 1.00th=[ 326], 5.00th=[ 392], 10.00th=[ 408], 20.00th=[ 437], 00:09:42.216 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 644], 60.00th=[ 693], 00:09:42.216 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:42.216 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:42.216 | 99.99th=[41157] 00:09:42.216 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:42.216 slat (nsec): min=9821, max=71082, avg=26872.97, stdev=10914.61 00:09:42.216 clat (usec): min=118, max=1229, avg=371.02, stdev=141.97 00:09:42.216 lat (usec): min=151, max=1262, avg=397.89, stdev=142.98 00:09:42.216 clat percentiles (usec): 00:09:42.216 | 1.00th=[ 182], 5.00th=[ 202], 10.00th=[ 247], 20.00th=[ 281], 00:09:42.216 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 347], 00:09:42.216 | 70.00th=[ 383], 80.00th=[ 445], 90.00th=[ 586], 95.00th=[ 668], 00:09:42.216 | 99.00th=[ 824], 99.50th=[ 955], 99.90th=[ 1237], 99.95th=[ 1237], 00:09:42.216 | 99.99th=[ 1237] 00:09:42.216 bw ( KiB/s): min= 4096, max= 4096, per=33.63%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.216 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.216 lat (usec) : 250=9.84%, 500=67.14%, 750=17.40%, 1000=2.11% 00:09:42.216 lat (msec) : 2=0.18%, 50=3.34% 00:09:42.216 cpu : usr=0.99%, sys=1.19%, ctx=570, majf=0, minf=2 00:09:42.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.216 issued rwts: total=57,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.216 job3: (groupid=0, jobs=1): err= 0: pid=3113812: Wed Oct 9 00:17:12 2024 00:09:42.216 read: IOPS=17, BW=71.4KiB/s (73.1kB/s)(72.0KiB/1009msec) 00:09:42.216 slat (nsec): min=27478, max=28154, avg=27797.39, stdev=169.51 00:09:42.216 clat (usec): min=40892, max=41231, avg=40985.77, stdev=80.64 00:09:42.216 lat (usec): min=40920, max=41259, avg=41013.57, stdev=80.58 00:09:42.216 clat percentiles (usec): 00:09:42.216 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:42.216 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:42.216 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:42.216 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:42.216 | 99.99th=[41157] 00:09:42.216 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:42.216 slat (usec): min=10, max=43120, avg=112.99, stdev=1904.79 00:09:42.216 clat (usec): min=118, max=787, avg=409.39, 
stdev=81.56 00:09:42.216 lat (usec): min=131, max=43557, avg=522.37, stdev=1908.11 00:09:42.216 clat percentiles (usec): 00:09:42.216 | 1.00th=[ 249], 5.00th=[ 277], 10.00th=[ 302], 20.00th=[ 330], 00:09:42.216 | 30.00th=[ 355], 40.00th=[ 388], 50.00th=[ 429], 60.00th=[ 445], 00:09:42.216 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 529], 00:09:42.216 | 99.00th=[ 570], 99.50th=[ 619], 99.90th=[ 791], 99.95th=[ 791], 00:09:42.216 | 99.99th=[ 791] 00:09:42.216 bw ( KiB/s): min= 4087, max= 4087, per=33.56%, avg=4087.00, stdev= 0.00, samples=1 00:09:42.216 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:42.216 lat (usec) : 250=1.32%, 500=85.66%, 750=9.43%, 1000=0.19% 00:09:42.216 lat (msec) : 50=3.40% 00:09:42.216 cpu : usr=0.69%, sys=1.29%, ctx=535, majf=0, minf=1 00:09:42.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.216 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.216 00:09:42.216 Run status group 0 (all jobs): 00:09:42.216 READ: bw=4571KiB/s (4681kB/s), 71.4KiB/s-2174KiB/s (73.1kB/s-2226kB/s), io=4612KiB (4723kB), run=1001-1009msec 00:09:42.216 WRITE: bw=11.9MiB/s (12.5MB/s), 2030KiB/s-4092KiB/s (2078kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1009msec 00:09:42.216 00:09:42.216 Disk stats (read/write): 00:09:42.216 nvme0n1: ios=562/661, merge=0/0, ticks=448/294, in_queue=742, util=80.46% 00:09:42.216 nvme0n2: ios=527/654, merge=0/0, ticks=494/321, in_queue=815, util=84.74% 00:09:42.216 nvme0n3: ios=51/512, merge=0/0, ticks=554/180, in_queue=734, util=86.41% 00:09:42.216 nvme0n4: ios=69/512, merge=0/0, ticks=933/203, in_queue=1136, util=99.31% 00:09:42.216 00:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:42.216 [global] 00:09:42.216 thread=1 00:09:42.216 invalidate=1 00:09:42.216 rw=randwrite 00:09:42.216 time_based=1 00:09:42.216 runtime=1 00:09:42.216 ioengine=libaio 00:09:42.216 direct=1 00:09:42.216 bs=4096 00:09:42.216 iodepth=1 00:09:42.216 norandommap=0 00:09:42.216 numjobs=1 00:09:42.216 00:09:42.216 verify_dump=1 00:09:42.216 verify_backlog=512 00:09:42.216 verify_state_save=0 00:09:42.216 do_verify=1 00:09:42.217 verify=crc32c-intel 00:09:42.217 [job0] 00:09:42.217 filename=/dev/nvme0n1 00:09:42.217 [job1] 00:09:42.217 filename=/dev/nvme0n2 00:09:42.217 [job2] 00:09:42.217 filename=/dev/nvme0n3 00:09:42.217 [job3] 00:09:42.217 filename=/dev/nvme0n4 00:09:42.217 Could not set queue depth (nvme0n1) 00:09:42.217 Could not set queue depth (nvme0n2) 00:09:42.217 Could not set queue depth (nvme0n3) 00:09:42.217 Could not set queue depth (nvme0n4) 00:09:42.479 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.479 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.479 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.479 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.479 fio-3.35 00:09:42.479 Starting 4 threads 00:09:43.888 00:09:43.888 
job0: (groupid=0, jobs=1): err= 0: pid=3114338: Wed Oct 9 00:17:14 2024 00:09:43.888 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1025msec) 00:09:43.888 slat (nsec): min=25814, max=27431, avg=26537.94, stdev=360.02 00:09:43.888 clat (usec): min=963, max=42049, avg=39515.81, stdev=9628.52 00:09:43.888 lat (usec): min=991, max=42075, avg=39542.34, stdev=9628.29 00:09:43.888 clat percentiles (usec): 00:09:43.888 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[41157], 20.00th=[41157], 00:09:43.888 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:43.888 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:43.888 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:43.888 | 99.99th=[42206] 00:09:43.888 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:43.888 slat (nsec): min=8877, max=67477, avg=28870.20, stdev=9556.39 00:09:43.888 clat (usec): min=227, max=1033, avg=575.26, stdev=116.31 00:09:43.888 lat (usec): min=236, max=1066, avg=604.13, stdev=120.48 00:09:43.888 clat percentiles (usec): 00:09:43.888 | 1.00th=[ 322], 5.00th=[ 367], 10.00th=[ 424], 20.00th=[ 469], 00:09:43.888 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 619], 00:09:43.888 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 742], 00:09:43.888 | 99.00th=[ 791], 99.50th=[ 816], 99.90th=[ 1037], 99.95th=[ 1037], 00:09:43.888 | 99.99th=[ 1037] 00:09:43.888 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:09:43.888 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:43.888 lat (usec) : 250=0.19%, 500=26.79%, 750=65.85%, 1000=3.77% 00:09:43.888 lat (msec) : 2=0.19%, 50=3.21% 00:09:43.888 cpu : usr=1.27%, sys=1.66%, ctx=530, majf=0, minf=1 00:09:43.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.888 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.888 job1: (groupid=0, jobs=1): err= 0: pid=3114339: Wed Oct 9 00:17:14 2024 00:09:43.888 read: IOPS=162, BW=650KiB/s (665kB/s)(664KiB/1022msec) 00:09:43.888 slat (nsec): min=9871, max=43315, avg=25163.54, stdev=2194.66 00:09:43.888 clat (usec): min=901, max=42028, avg=4028.65, stdev=10446.57 00:09:43.888 lat (usec): min=926, max=42053, avg=4053.81, stdev=10446.56 00:09:43.888 clat percentiles (usec): 00:09:43.888 | 1.00th=[ 930], 5.00th=[ 1012], 10.00th=[ 1045], 20.00th=[ 1074], 00:09:43.888 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1139], 00:09:43.888 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[41157], 00:09:43.888 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:43.888 | 99.99th=[42206] 00:09:43.888 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:09:43.888 slat (nsec): min=9021, max=51215, avg=29703.82, stdev=8082.09 00:09:43.888 clat (usec): min=222, max=978, avg=641.01, stdev=143.93 00:09:43.888 lat (usec): min=232, max=1009, avg=670.71, stdev=147.08 00:09:43.888 clat percentiles (usec): 00:09:43.888 | 1.00th=[ 289], 5.00th=[ 383], 10.00th=[ 461], 20.00th=[ 529], 00:09:43.888 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 676], 00:09:43.888 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 816], 95.00th=[ 873], 
00:09:43.888 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 979], 99.95th=[ 979], 00:09:43.888 | 99.99th=[ 979] 00:09:43.888 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:09:43.888 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:43.888 lat (usec) : 250=0.15%, 500=12.09%, 750=43.95%, 1000=20.50% 00:09:43.888 lat (msec) : 2=21.53%, 50=1.77% 00:09:43.888 cpu : usr=0.78%, sys=2.15%, ctx=678, majf=0, minf=1 00:09:43.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.888 issued rwts: total=166,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.888 job2: (groupid=0, jobs=1): err= 0: pid=3114340: Wed Oct 9 00:17:14 2024 00:09:43.888 read: IOPS=17, BW=69.8KiB/s (71.4kB/s)(72.0KiB/1032msec) 00:09:43.888 slat (nsec): min=25575, max=26454, avg=25784.56, stdev=201.85 00:09:43.888 clat (usec): min=41408, max=42090, avg=41936.02, stdev=152.86 00:09:43.888 lat (usec): min=41434, max=42116, avg=41961.80, stdev=152.72 00:09:43.888 clat percentiles (usec): 00:09:43.888 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:09:43.888 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:43.888 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:43.888 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:43.888 | 99.99th=[42206] 00:09:43.888 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:09:43.888 slat (nsec): min=9910, max=65653, avg=30853.01, stdev=6643.61 00:09:43.888 clat (usec): min=111, max=812, avg=501.22, stdev=132.57 00:09:43.888 lat (usec): min=126, max=844, avg=532.08, stdev=133.90 00:09:43.888 clat percentiles (usec): 00:09:43.888 | 1.00th=[ 172], 5.00th=[ 273], 10.00th=[ 310], 20.00th=[ 392], 00:09:43.888 | 30.00th=[ 433], 40.00th=[ 478], 50.00th=[ 515], 60.00th=[ 553], 00:09:43.888 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 709], 00:09:43.888 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:09:43.888 | 99.99th=[ 816] 00:09:43.888 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:09:43.888 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:43.888 lat (usec) : 250=2.26%, 500=41.13%, 750=51.51%, 1000=1.70% 00:09:43.888 lat (msec) : 50=3.40% 00:09:43.888 cpu : usr=0.87%, sys=1.45%, ctx=530, majf=0, minf=1 00:09:43.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.888 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.888 job3: (groupid=0, jobs=1): err= 0: pid=3114341: Wed Oct 9 00:17:14 2024 00:09:43.888 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:43.888 slat (nsec): min=8196, max=46775, avg=26473.39, stdev=2440.00 00:09:43.888 clat (usec): min=629, max=1255, avg=977.58, stdev=71.85 00:09:43.888 lat (usec): min=656, max=1281, avg=1004.05, stdev=71.96 00:09:43.888 clat percentiles (usec): 00:09:43.888 | 1.00th=[ 783], 5.00th=[ 840], 10.00th=[ 889], 
20.00th=[ 930], 00:09:43.888 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:09:43.888 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:09:43.888 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1254], 99.95th=[ 1254], 00:09:43.888 | 99.99th=[ 1254] 00:09:43.888 write: IOPS=746, BW=2985KiB/s (3057kB/s)(2988KiB/1001msec); 0 zone resets 00:09:43.888 slat (nsec): min=9804, max=53362, avg=30909.86, stdev=8314.95 00:09:43.888 clat (usec): min=132, max=960, avg=606.20, stdev=126.56 00:09:43.888 lat (usec): min=142, max=994, avg=637.11, stdev=129.26 00:09:43.888 clat percentiles (usec): 00:09:43.888 | 1.00th=[ 265], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 506], 00:09:43.888 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:09:43.888 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 791], 00:09:43.888 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 963], 99.95th=[ 963], 00:09:43.888 | 99.99th=[ 963] 00:09:43.888 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:09:43.888 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:43.888 lat (usec) : 250=0.56%, 500=10.33%, 750=42.49%, 1000=30.98% 00:09:43.888 lat (msec) : 2=15.65% 00:09:43.888 cpu : usr=1.60%, sys=4.10%, ctx=1261, majf=0, minf=1 00:09:43.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.888 issued rwts: total=512,747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.888 00:09:43.888 Run status group 0 (all jobs): 00:09:43.888 READ: bw=2767KiB/s (2834kB/s), 69.8KiB/s-2046KiB/s (71.4kB/s-2095kB/s), io=2856KiB (2925kB), run=1001-1032msec 00:09:43.888 WRITE: bw=8849KiB/s (9061kB/s), 1984KiB/s-2985KiB/s (2032kB/s-3057kB/s), io=9132KiB (9351kB), run=1001-1032msec 00:09:43.888 00:09:43.888 Disk stats (read/write): 00:09:43.888 nvme0n1: ios=63/512, merge=0/0, ticks=567/234, in_queue=801, util=88.08% 00:09:43.888 nvme0n2: ios=178/512, merge=0/0, ticks=565/307, in_queue=872, util=91.23% 00:09:43.888 nvme0n3: ios=40/512, merge=0/0, ticks=822/227, in_queue=1049, util=90.93% 00:09:43.888 nvme0n4: ios=536/512, merge=0/0, ticks=655/307, in_queue=962, util=97.12% 00:09:43.888 00:17:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:43.888 [global] 00:09:43.888 thread=1 00:09:43.888 invalidate=1 00:09:43.888 rw=write 00:09:43.888 time_based=1 00:09:43.888 runtime=1 00:09:43.888 ioengine=libaio 00:09:43.888 direct=1 00:09:43.888 bs=4096 00:09:43.888 iodepth=128 00:09:43.888 norandommap=0 00:09:43.888 numjobs=1 00:09:43.888 00:09:43.888 verify_dump=1 00:09:43.888 verify_backlog=512 00:09:43.888 verify_state_save=0 00:09:43.888 do_verify=1 00:09:43.888 verify=crc32c-intel 00:09:43.888 [job0] 00:09:43.888 filename=/dev/nvme0n1 00:09:43.888 [job1] 00:09:43.888 filename=/dev/nvme0n2 00:09:43.888 [job2] 00:09:43.888 filename=/dev/nvme0n3 00:09:43.888 [job3] 00:09:43.888 filename=/dev/nvme0n4 00:09:43.888 Could not set queue depth (nvme0n1) 00:09:43.888 Could not set queue depth (nvme0n2) 00:09:43.888 Could not set queue depth (nvme0n3) 00:09:43.889 Could not set queue depth (nvme0n4) 00:09:44.162 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.162 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.162 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.162 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.162 fio-3.35 00:09:44.162 Starting 4 threads 00:09:45.549 00:09:45.549 job0: (groupid=0, jobs=1): err= 0: pid=3114860: Wed Oct 9 00:17:15 2024 00:09:45.549 read: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec) 00:09:45.549 slat (nsec): min=915, max=7582.6k, avg=70449.51, stdev=544888.94 00:09:45.549 clat (usec): min=2975, max=16123, avg=8801.41, stdev=1924.71 00:09:45.549 lat (usec): min=2981, max=16130, avg=8871.86, stdev=1968.33 00:09:45.549 clat percentiles (usec): 00:09:45.549 | 1.00th=[ 3884], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 7767], 00:09:45.549 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:09:45.549 | 70.00th=[ 8979], 80.00th=[ 9896], 90.00th=[11600], 95.00th=[12911], 00:09:45.549 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15401], 99.95th=[15533], 00:09:45.549 | 99.99th=[16188] 00:09:45.549 write: IOPS=7813, BW=30.5MiB/s (32.0MB/s)(30.8MiB/1008msec); 0 zone resets 00:09:45.549 slat (nsec): min=1616, max=7010.0k, avg=53698.29, stdev=294823.87 00:09:45.549 clat (usec): min=1131, max=15361, avg=7655.63, stdev=1723.50 00:09:45.549 lat (usec): min=1141, max=15364, avg=7709.33, stdev=1739.91 00:09:45.549 clat percentiles (usec): 00:09:45.549 | 1.00th=[ 2868], 5.00th=[ 4228], 10.00th=[ 5145], 20.00th=[ 6652], 00:09:45.549 | 30.00th=[ 7504], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:09:45.549 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[10028], 00:09:45.549 | 99.00th=[12780], 99.50th=[13042], 99.90th=[14877], 99.95th=[15270], 00:09:45.549 | 99.99th=[15401] 00:09:45.549 bw ( KiB/s): min=29240, max=32752, per=26.38%, avg=30996.00, stdev=2483.36, samples=2 00:09:45.549 iops : min= 7310, max= 8188, avg=7749.00, stdev=620.84, samples=2 00:09:45.549 lat (msec) : 2=0.05%, 4=2.68%, 10=85.17%, 20=12.10% 00:09:45.549 cpu : usr=4.77%, sys=6.95%, ctx=854, majf=0, minf=1 00:09:45.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:45.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.549 issued rwts: total=7680,7876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.549 job1: (groupid=0, jobs=1): err= 0: pid=3114862: Wed Oct 9 00:17:15 2024 00:09:45.549 read: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec) 00:09:45.549 slat (nsec): min=923, max=7752.5k, avg=66985.36, stdev=520430.31 00:09:45.549 clat (usec): min=2906, max=16233, avg=8654.62, stdev=1879.05 00:09:45.549 lat (usec): min=2912, max=16245, avg=8721.60, stdev=1922.11 00:09:45.549 clat percentiles (usec): 00:09:45.549 | 1.00th=[ 4015], 5.00th=[ 6194], 10.00th=[ 7046], 20.00th=[ 7504], 00:09:45.549 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8356], 00:09:45.549 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[11469], 95.00th=[12649], 00:09:45.549 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15533], 99.95th=[15533], 00:09:45.549 | 99.99th=[16188] 00:09:45.549 write: IOPS=8045, BW=31.4MiB/s (33.0MB/s)(31.7MiB/1008msec); 0 zone 
resets 00:09:45.549 slat (nsec): min=1596, max=6780.8k, avg=54293.11, stdev=320380.76 00:09:45.549 clat (usec): min=827, max=15560, avg=7571.71, stdev=1899.70 00:09:45.549 lat (usec): min=835, max=15563, avg=7626.01, stdev=1915.57 00:09:45.549 clat percentiles (usec): 00:09:45.549 | 1.00th=[ 2573], 5.00th=[ 3949], 10.00th=[ 4817], 20.00th=[ 6063], 00:09:45.549 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:09:45.549 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[10814], 00:09:45.549 | 99.00th=[12256], 99.50th=[14353], 99.90th=[15270], 99.95th=[15270], 00:09:45.549 | 99.99th=[15533] 00:09:45.549 bw ( KiB/s): min=31112, max=32752, per=27.18%, avg=31932.00, stdev=1159.66, samples=2 00:09:45.549 iops : min= 7778, max= 8188, avg=7983.00, stdev=289.91, samples=2 00:09:45.549 lat (usec) : 1000=0.02% 00:09:45.549 lat (msec) : 2=0.21%, 4=3.07%, 10=84.67%, 20=12.03% 00:09:45.549 cpu : usr=5.16%, sys=7.25%, ctx=812, majf=0, minf=1 00:09:45.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:45.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.549 issued rwts: total=7680,8110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.549 job2: (groupid=0, jobs=1): err= 0: pid=3114868: Wed Oct 9 00:17:15 2024 00:09:45.549 read: IOPS=6404, BW=25.0MiB/s (26.2MB/s)(25.1MiB/1005msec) 00:09:45.549 slat (nsec): min=977, max=9242.7k, avg=84510.24, stdev=648374.42 00:09:45.549 clat (usec): min=3177, max=26243, avg=10449.47, stdev=2813.84 00:09:45.549 lat (usec): min=3185, max=26245, avg=10533.98, stdev=2853.96 00:09:45.549 clat percentiles (usec): 00:09:45.549 | 1.00th=[ 4047], 5.00th=[ 6783], 10.00th=[ 8029], 20.00th=[ 8848], 00:09:45.549 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[10028], 00:09:45.549 | 70.00th=[10683], 80.00th=[12518], 90.00th=[14484], 95.00th=[16057], 00:09:45.549 | 99.00th=[18220], 99.50th=[21365], 99.90th=[25560], 99.95th=[26346], 00:09:45.549 | 99.99th=[26346] 00:09:45.549 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:09:45.549 slat (nsec): min=1730, max=7889.8k, avg=63841.35, stdev=274859.93 00:09:45.549 clat (usec): min=1172, max=26244, avg=9008.58, stdev=2511.98 00:09:45.549 lat (usec): min=1182, max=26249, avg=9072.42, stdev=2531.80 00:09:45.549 clat percentiles (usec): 00:09:45.549 | 1.00th=[ 3130], 5.00th=[ 4555], 10.00th=[ 5735], 20.00th=[ 8160], 00:09:45.549 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:09:45.549 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[ 9896], 95.00th=[10159], 00:09:45.549 | 99.00th=[19792], 99.50th=[20579], 99.90th=[22676], 99.95th=[22676], 00:09:45.549 | 99.99th=[26346] 00:09:45.549 bw ( KiB/s): min=25104, max=28144, per=22.66%, avg=26624.00, stdev=2149.60, samples=2 00:09:45.549 iops : min= 6276, max= 7036, avg=6656.00, stdev=537.40, samples=2 00:09:45.549 lat (msec) : 2=0.02%, 4=2.07%, 10=73.31%, 20=23.84%, 50=0.76% 00:09:45.549 cpu : usr=3.19%, sys=6.87%, ctx=858, majf=0, minf=1 00:09:45.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:45.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.549 issued rwts: total=6437,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.549 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:45.550 job3: (groupid=0, jobs=1): err= 0: pid=3114869: Wed Oct 9 00:17:15 2024 00:09:45.550 read: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec) 00:09:45.550 slat (nsec): min=955, max=8343.2k, avg=77457.61, stdev=582143.73 00:09:45.550 clat (usec): min=3968, max=23362, avg=10090.03, stdev=2399.98 00:09:45.550 lat (usec): min=4345, max=23364, avg=10167.48, stdev=2442.69 00:09:45.550 clat percentiles (usec): 00:09:45.550 | 1.00th=[ 5866], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 8586], 00:09:45.550 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:09:45.550 | 70.00th=[10421], 80.00th=[11863], 90.00th=[13698], 95.00th=[15270], 00:09:45.550 | 99.00th=[17171], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:09:45.550 | 99.99th=[23462] 00:09:45.550 write: IOPS=6910, BW=27.0MiB/s (28.3MB/s)(27.2MiB/1008msec); 0 zone resets 00:09:45.550 slat (nsec): min=1660, max=8162.4k, avg=62088.86, stdev=436757.76 00:09:45.550 clat (usec): min=1169, max=17337, avg=8728.53, stdev=2064.87 00:09:45.550 lat (usec): min=1179, max=17342, avg=8790.61, stdev=2099.02 00:09:45.550 clat percentiles (usec): 00:09:45.550 | 1.00th=[ 3752], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 7177], 00:09:45.550 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:09:45.550 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[13042], 00:09:45.550 | 99.00th=[15139], 99.50th=[15795], 99.90th=[16909], 99.95th=[17171], 00:09:45.550 | 99.99th=[17433] 00:09:45.550 bw ( KiB/s): min=26040, max=28672, per=23.28%, avg=27356.00, stdev=1861.11, samples=2 00:09:45.550 iops : min= 6510, max= 7168, avg=6839.00, stdev=465.28, samples=2 00:09:45.550 lat (msec) : 2=0.07%, 4=0.84%, 10=76.60%, 20=22.48%, 50=0.01% 00:09:45.550 cpu : usr=3.77%, sys=8.24%, ctx=563, majf=0, minf=1 00:09:45.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:45.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.550 issued rwts: total=6656,6966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.550 00:09:45.550 Run status group 0 (all jobs): 00:09:45.550 READ: bw=110MiB/s (116MB/s), 25.0MiB/s-29.8MiB/s (26.2MB/s-31.2MB/s), io=111MiB (117MB), run=1005-1008msec 00:09:45.550 WRITE: bw=115MiB/s (120MB/s), 25.9MiB/s-31.4MiB/s (27.1MB/s-33.0MB/s), io=116MiB (121MB), run=1005-1008msec 00:09:45.550 00:09:45.550 Disk stats (read/write): 00:09:45.550 nvme0n1: ios=6194/6656, merge=0/0, ticks=51885/49328, in_queue=101213, util=91.98% 00:09:45.550 nvme0n2: ios=6300/6656, merge=0/0, ticks=52925/48593, in_queue=101518, util=86.85% 00:09:45.550 nvme0n3: ios=5390/5632, merge=0/0, ticks=54607/47854, in_queue=102461, util=99.58% 00:09:45.550 nvme0n4: ios=5659/5719, merge=0/0, ticks=53895/47521, in_queue=101416, util=91.04% 00:09:45.550 00:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:45.550 [global] 00:09:45.550 thread=1 00:09:45.550 invalidate=1 00:09:45.550 rw=randwrite 00:09:45.550 time_based=1 00:09:45.550 runtime=1 00:09:45.550 ioengine=libaio 00:09:45.550 direct=1 00:09:45.550 bs=4096 00:09:45.550 iodepth=128 00:09:45.550 norandommap=0 00:09:45.550 numjobs=1 00:09:45.550 00:09:45.550 verify_dump=1 00:09:45.550 
verify_backlog=512 00:09:45.550 verify_state_save=0 00:09:45.550 do_verify=1 00:09:45.550 verify=crc32c-intel 00:09:45.550 [job0] 00:09:45.550 filename=/dev/nvme0n1 00:09:45.550 [job1] 00:09:45.550 filename=/dev/nvme0n2 00:09:45.550 [job2] 00:09:45.550 filename=/dev/nvme0n3 00:09:45.550 [job3] 00:09:45.550 filename=/dev/nvme0n4 00:09:45.550 Could not set queue depth (nvme0n1) 00:09:45.550 Could not set queue depth (nvme0n2) 00:09:45.550 Could not set queue depth (nvme0n3) 00:09:45.550 Could not set queue depth (nvme0n4) 00:09:45.809 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.810 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.810 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.810 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.810 fio-3.35 00:09:45.810 Starting 4 threads 00:09:47.203 00:09:47.203 job0: (groupid=0, jobs=1): err= 0: pid=3115387: Wed Oct 9 00:17:17 2024 00:09:47.203 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:09:47.203 slat (nsec): min=967, max=18605k, avg=91743.36, stdev=753659.38 00:09:47.203 clat (usec): min=1171, max=97311, avg=11297.82, stdev=9936.83 00:09:47.203 lat (usec): min=1195, max=97320, avg=11389.56, stdev=10044.47 00:09:47.203 clat percentiles (usec): 00:09:47.203 | 1.00th=[ 1844], 5.00th=[ 3228], 10.00th=[ 4752], 20.00th=[ 6456], 00:09:47.203 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 8455], 60.00th=[10552], 00:09:47.203 | 70.00th=[11994], 80.00th=[15008], 90.00th=[18220], 95.00th=[25560], 00:09:47.203 | 99.00th=[64226], 99.50th=[79168], 99.90th=[96994], 99.95th=[96994], 00:09:47.203 | 99.99th=[96994] 00:09:47.203 write: IOPS=4867, BW=19.0MiB/s (19.9MB/s)(19.2MiB/1012msec); 0 zone resets 00:09:47.203 slat (nsec): min=1529, max=10732k, avg=100495.70, stdev=643149.32 00:09:47.203 clat (usec): min=687, max=97300, avg=15515.96, stdev=17590.73 00:09:47.203 lat (usec): min=718, max=97310, avg=15616.46, stdev=17700.04 00:09:47.203 clat percentiles (usec): 00:09:47.203 | 1.00th=[ 1254], 5.00th=[ 2540], 10.00th=[ 4424], 20.00th=[ 6259], 00:09:47.203 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7635], 60.00th=[10814], 00:09:47.203 | 70.00th=[14091], 80.00th=[22938], 90.00th=[35914], 95.00th=[50594], 00:09:47.203 | 99.00th=[89654], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:09:47.203 | 99.99th=[96994] 00:09:47.203 bw ( KiB/s): min=13816, max=24576, per=20.58%, avg=19196.00, stdev=7608.47, samples=2 00:09:47.203 iops : min= 3454, max= 6144, avg=4799.00, stdev=1902.12, samples=2 00:09:47.203 lat (usec) : 750=0.01%, 1000=0.03% 00:09:47.203 lat (msec) : 2=3.04%, 4=4.90%, 10=50.12%, 20=27.03%, 50=11.23% 00:09:47.203 lat (msec) : 100=3.64% 00:09:47.203 cpu : usr=4.15%, sys=5.04%, ctx=373, majf=0, minf=2 00:09:47.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:47.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.203 issued rwts: total=4608,4926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.203 job1: (groupid=0, jobs=1): err= 0: pid=3115388: Wed Oct 9 00:17:17 2024 00:09:47.203 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(39.4MiB/1004msec) 
00:09:47.203 slat (nsec): min=951, max=6502.0k, avg=52402.91, stdev=371912.22 00:09:47.203 clat (usec): min=2277, max=13384, avg=6735.08, stdev=1653.03 00:09:47.204 lat (usec): min=2279, max=13386, avg=6787.49, stdev=1675.51 00:09:47.204 clat percentiles (usec): 00:09:47.204 | 1.00th=[ 3228], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5538], 00:09:47.204 | 30.00th=[ 5800], 40.00th=[ 5997], 50.00th=[ 6390], 60.00th=[ 6915], 00:09:47.204 | 70.00th=[ 7308], 80.00th=[ 7701], 90.00th=[ 9110], 95.00th=[10159], 00:09:47.204 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13042], 99.95th=[13042], 00:09:47.204 | 99.99th=[13042] 00:09:47.204 write: IOPS=10.2k, BW=39.8MiB/s (41.8MB/s)(40.0MiB/1004msec); 0 zone resets 00:09:47.204 slat (nsec): min=1629, max=5673.2k, avg=41920.32, stdev=222987.09 00:09:47.204 clat (usec): min=1462, max=13012, avg=5799.38, stdev=1250.02 00:09:47.204 lat (usec): min=1471, max=13015, avg=5841.30, stdev=1270.09 00:09:47.204 clat percentiles (usec): 00:09:47.204 | 1.00th=[ 2278], 5.00th=[ 3195], 10.00th=[ 3949], 20.00th=[ 5211], 00:09:47.204 | 30.00th=[ 5473], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:09:47.204 | 70.00th=[ 6456], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7242], 00:09:47.204 | 99.00th=[ 7504], 99.50th=[ 7832], 99.90th=[12518], 99.95th=[13042], 00:09:47.204 | 99.99th=[13042] 00:09:47.204 bw ( KiB/s): min=39216, max=42704, per=43.90%, avg=40960.00, stdev=2466.39, samples=2 00:09:47.204 iops : min= 9804, max=10676, avg=10240.00, stdev=616.60, samples=2 00:09:47.204 lat (msec) : 2=0.13%, 4=5.91%, 10=91.03%, 20=2.93% 00:09:47.204 cpu : usr=5.18%, sys=9.17%, ctx=1147, majf=0, minf=1 00:09:47.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:47.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.204 issued rwts: total=10077,10240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.204 job2: (groupid=0, jobs=1): err= 0: pid=3115389: Wed Oct 9 00:17:17 2024 00:09:47.204 read: IOPS=3622, BW=14.2MiB/s (14.8MB/s)(14.2MiB/1007msec) 00:09:47.204 slat (nsec): min=955, max=24499k, avg=138418.45, stdev=1066923.12 00:09:47.204 clat (usec): min=3179, max=68064, avg=18733.10, stdev=11842.18 00:09:47.204 lat (usec): min=3188, max=77369, avg=18871.52, stdev=11955.41 00:09:47.204 clat percentiles (usec): 00:09:47.204 | 1.00th=[ 6194], 5.00th=[ 7308], 10.00th=[ 8291], 20.00th=[ 8586], 00:09:47.204 | 30.00th=[ 8848], 40.00th=[12125], 50.00th=[13435], 60.00th=[17957], 00:09:47.204 | 70.00th=[23462], 80.00th=[28967], 90.00th=[35914], 95.00th=[42730], 00:09:47.204 | 99.00th=[50594], 99.50th=[58983], 99.90th=[64750], 99.95th=[64750], 00:09:47.204 | 99.99th=[67634] 00:09:47.204 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:09:47.204 slat (nsec): min=1637, max=11341k, avg=115769.92, stdev=751927.22 00:09:47.204 clat (usec): min=4337, max=78765, avg=14425.14, stdev=11661.32 00:09:47.204 lat (usec): min=4345, max=79730, avg=14540.91, stdev=11745.43 00:09:47.204 clat percentiles (usec): 00:09:47.204 | 1.00th=[ 4490], 5.00th=[ 5342], 10.00th=[ 7898], 20.00th=[ 8094], 00:09:47.204 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[11338], 00:09:47.204 | 70.00th=[12780], 80.00th=[20841], 90.00th=[27395], 95.00th=[38011], 00:09:47.204 | 99.00th=[68682], 99.50th=[74974], 99.90th=[79168], 99.95th=[79168], 00:09:47.204 | 
99.99th=[79168] 00:09:47.204 bw ( KiB/s): min=11776, max=20480, per=17.29%, avg=16128.00, stdev=6154.66, samples=2 00:09:47.204 iops : min= 2944, max= 5120, avg=4032.00, stdev=1538.66, samples=2 00:09:47.204 lat (msec) : 4=0.22%, 10=45.48%, 20=25.40%, 50=27.00%, 100=1.90% 00:09:47.204 cpu : usr=3.48%, sys=3.88%, ctx=243, majf=0, minf=1 00:09:47.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:47.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.204 issued rwts: total=3648,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.204 job3: (groupid=0, jobs=1): err= 0: pid=3115390: Wed Oct 9 00:17:17 2024 00:09:47.204 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:09:47.204 slat (nsec): min=1015, max=14889k, avg=110345.30, stdev=854152.07 00:09:47.204 clat (usec): min=5081, max=45776, avg=13577.65, stdev=5340.53 00:09:47.204 lat (usec): min=5091, max=45778, avg=13688.00, stdev=5418.30 00:09:47.204 clat percentiles (usec): 00:09:47.204 | 1.00th=[ 6521], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:09:47.204 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11863], 60.00th=[13566], 00:09:47.204 | 70.00th=[15664], 80.00th=[16450], 90.00th=[19530], 95.00th=[21627], 00:09:47.204 | 99.00th=[35914], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:09:47.204 | 99.99th=[45876] 00:09:47.204 write: IOPS=4290, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1012msec); 0 zone resets 00:09:47.204 slat (nsec): min=1784, max=13346k, avg=118901.82, stdev=732923.59 00:09:47.204 clat (usec): min=2127, max=58296, avg=16741.43, stdev=11741.24 00:09:47.204 lat (usec): min=2154, max=58305, avg=16860.33, stdev=11811.79 00:09:47.204 clat percentiles (usec): 00:09:47.204 | 1.00th=[ 4228], 5.00th=[ 5473], 10.00th=[ 6652], 20.00th=[ 7898], 00:09:47.204 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11994], 60.00th=[13698], 00:09:47.204 | 70.00th=[17695], 80.00th=[25822], 90.00th=[37487], 95.00th=[42206], 00:09:47.204 | 99.00th=[49021], 99.50th=[56361], 99.90th=[58459], 99.95th=[58459], 00:09:47.204 | 99.99th=[58459] 00:09:47.204 bw ( KiB/s): min=14672, max=19048, per=18.07%, avg=16860.00, stdev=3094.30, samples=2 00:09:47.204 iops : min= 3668, max= 4762, avg=4215.00, stdev=773.57, samples=2 00:09:47.204 lat (msec) : 4=0.36%, 10=31.03%, 20=50.82%, 50=17.29%, 100=0.51% 00:09:47.204 cpu : usr=3.66%, sys=4.75%, ctx=305, majf=0, minf=1 00:09:47.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:47.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.204 issued rwts: total=4096,4342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.204 00:09:47.204 Run status group 0 (all jobs): 00:09:47.204 READ: bw=86.6MiB/s (90.8MB/s), 14.2MiB/s-39.2MiB/s (14.8MB/s-41.1MB/s), io=87.6MiB (91.9MB), run=1004-1012msec 00:09:47.204 WRITE: bw=91.1MiB/s (95.5MB/s), 15.9MiB/s-39.8MiB/s (16.7MB/s-41.8MB/s), io=92.2MiB (96.7MB), run=1004-1012msec 00:09:47.204 00:09:47.204 Disk stats (read/write): 00:09:47.204 nvme0n1: ios=4146/4591, merge=0/0, ticks=35272/61595, in_queue=96867, util=95.29% 00:09:47.204 nvme0n2: ios=8214/8411, merge=0/0, ticks=54378/47770, in_queue=102148, util=97.25% 00:09:47.204 nvme0n3: 
ios=3115/3584, merge=0/0, ticks=24994/26230, in_queue=51224, util=88.40% 00:09:47.204 nvme0n4: ios=3256/3584, merge=0/0, ticks=43323/59612, in_queue=102935, util=100.00% 00:09:47.204 00:17:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:47.204 00:17:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3115630 00:09:47.204 00:17:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:47.204 00:17:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:47.204 [global] 00:09:47.204 thread=1 00:09:47.204 invalidate=1 00:09:47.204 rw=read 00:09:47.204 time_based=1 00:09:47.204 runtime=10 00:09:47.204 ioengine=libaio 00:09:47.204 direct=1 00:09:47.204 bs=4096 00:09:47.204 iodepth=1 00:09:47.204 norandommap=1 00:09:47.204 numjobs=1 00:09:47.204 00:09:47.204 [job0] 00:09:47.204 filename=/dev/nvme0n1 00:09:47.204 [job1] 00:09:47.204 filename=/dev/nvme0n2 00:09:47.204 [job2] 00:09:47.204 filename=/dev/nvme0n3 00:09:47.204 [job3] 00:09:47.204 filename=/dev/nvme0n4 00:09:47.204 Could not set queue depth (nvme0n1) 00:09:47.204 Could not set queue depth (nvme0n2) 00:09:47.204 Could not set queue depth (nvme0n3) 00:09:47.204 Could not set queue depth (nvme0n4) 00:09:47.463 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.463 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.463 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.463 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.463 fio-3.35 00:09:47.463 Starting 4 threads 00:09:49.994 00:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:50.253 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=770048, buflen=4096 00:09:50.253 fio: pid=3115934, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.253 00:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:50.253 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=348160, buflen=4096 00:09:50.253 fio: pid=3115933, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.253 00:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.253 00:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:50.511 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=626688, buflen=4096 00:09:50.511 fio: pid=3115930, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.511 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.511 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:09:50.770 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=389120, buflen=4096 00:09:50.770 fio: pid=3115931, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.770 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.770 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:50.770 00:09:50.770 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3115930: Wed Oct 9 00:17:21 2024 00:09:50.770 read: IOPS=52, BW=207KiB/s (212kB/s)(612KiB/2950msec) 00:09:50.770 slat (usec): min=7, max=2638, avg=41.76, stdev=210.64 00:09:50.770 clat (usec): min=503, max=41933, avg=19228.40, stdev=20069.20 00:09:50.770 lat (usec): min=528, max=44100, avg=19270.27, stdev=20090.25 00:09:50.770 clat percentiles (usec): 00:09:50.770 | 1.00th=[ 510], 5.00th=[ 627], 10.00th=[ 758], 20.00th=[ 832], 00:09:50.770 | 30.00th=[ 906], 40.00th=[ 963], 50.00th=[ 1012], 60.00th=[41157], 00:09:50.770 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:50.770 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:50.770 | 99.99th=[41681] 00:09:50.770 bw ( KiB/s): min= 96, max= 104, per=14.99%, avg=99.20, stdev= 4.38, samples=5 00:09:50.770 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:09:50.770 lat (usec) : 750=8.44%, 1000=38.96% 00:09:50.770 lat (msec) : 2=6.49%, 50=45.45% 00:09:50.770 cpu : usr=0.07%, sys=0.14%, ctx=155, majf=0, minf=1 00:09:50.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.770 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.770 issued rwts: total=154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.770 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3115931: Wed Oct 9 00:17:21 2024 00:09:50.770 read: IOPS=30, BW=120KiB/s (123kB/s)(380KiB/3155msec) 00:09:50.770 slat (usec): min=7, max=11805, avg=150.87, stdev=1202.22 00:09:50.771 clat (usec): min=474, max=42182, avg=33046.27, stdev=16772.42 00:09:50.771 lat (usec): min=510, max=53988, avg=33196.12, stdev=16883.78 00:09:50.771 clat percentiles (usec): 00:09:50.771 | 1.00th=[ 474], 5.00th=[ 881], 10.00th=[ 971], 20.00th=[ 1336], 00:09:50.771 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:50.771 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:50.771 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:50.771 | 99.99th=[42206] 00:09:50.771 bw ( KiB/s): min= 96, max= 240, per=18.32%, avg=121.33, stdev=58.22, samples=6 00:09:50.771 iops : min= 24, max= 60, avg=30.33, stdev=14.56, samples=6 00:09:50.771 lat (usec) : 500=1.04%, 750=2.08%, 1000=10.42% 00:09:50.771 lat (msec) : 2=7.29%, 50=78.12% 00:09:50.771 cpu : usr=0.00%, sys=0.13%, ctx=98, majf=0, minf=2 00:09:50.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.771 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:09:50.771 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.771 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3115933: Wed Oct 9 00:17:21 2024 00:09:50.771 read: IOPS=30, BW=122KiB/s (125kB/s)(340KiB/2782msec) 00:09:50.771 slat (usec): min=26, max=14682, avg=198.18, stdev=1580.28 00:09:50.771 clat (usec): min=761, max=41595, avg=32511.01, stdev=16437.42 00:09:50.771 lat (usec): min=790, max=56026, avg=32711.19, stdev=16607.05 00:09:50.771 clat percentiles (usec): 00:09:50.771 | 1.00th=[ 758], 5.00th=[ 947], 10.00th=[ 971], 20.00th=[ 1287], 00:09:50.771 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:50.771 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:50.771 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:50.771 | 99.99th=[41681] 00:09:50.771 bw ( KiB/s): min= 96, max= 232, per=18.77%, avg=124.80, stdev=60.03, samples=5 00:09:50.771 iops : min= 24, max= 58, avg=31.20, stdev=15.01, samples=5 00:09:50.771 lat (usec) : 1000=17.44% 00:09:50.771 lat (msec) : 2=3.49%, 50=77.91% 00:09:50.771 cpu : usr=0.00%, sys=0.14%, ctx=88, majf=0, minf=2 00:09:50.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.771 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.771 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.771 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3115934: Wed Oct 9 00:17:21 2024 00:09:50.771 read: IOPS=73, BW=291KiB/s (298kB/s)(752KiB/2587msec) 00:09:50.771 slat (nsec): min=24472, max=67730, avg=25964.07, stdev=4214.75 00:09:50.771 clat (usec): min=683, max=42088, avg=13720.52, stdev=18955.66 00:09:50.771 lat (usec): min=709, max=42113, avg=13746.49, stdev=18954.90 00:09:50.771 clat percentiles (usec): 00:09:50.771 | 1.00th=[ 766], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 914], 00:09:50.771 | 30.00th=[ 938], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 988], 00:09:50.771 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:50.771 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:50.771 | 99.99th=[42206] 00:09:50.771 bw ( KiB/s): min= 88, max= 1096, per=44.81%, avg=296.00, stdev=447.29, samples=5 00:09:50.771 iops : min= 22, max= 274, avg=74.00, stdev=111.82, samples=5 00:09:50.771 lat (usec) : 750=0.53%, 1000=62.43% 00:09:50.771 lat (msec) : 2=5.29%, 50=31.22% 00:09:50.771 cpu : usr=0.08%, sys=0.19%, ctx=189, majf=0, minf=2 00:09:50.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.771 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.771 issued rwts: total=189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.771 00:09:50.771 Run status group 0 (all jobs): 00:09:50.771 READ: bw=661KiB/s (676kB/s), 120KiB/s-291KiB/s (123kB/s-298kB/s), io=2084KiB (2134kB), run=2587-3155msec 00:09:50.771 00:09:50.771 Disk stats (read/write): 00:09:50.771 nvme0n1: ios=70/0, merge=0/0, ticks=2792/0, 
in_queue=2792, util=94.59% 00:09:50.771 nvme0n2: ios=93/0, merge=0/0, ticks=3058/0, in_queue=3058, util=95.32% 00:09:50.771 nvme0n3: ios=115/0, merge=0/0, ticks=3305/0, in_queue=3305, util=100.00% 00:09:50.771 nvme0n4: ios=188/0, merge=0/0, ticks=2578/0, in_queue=2578, util=96.38% 00:09:51.029 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.029 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:51.029 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.029 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:51.287 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.287 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:51.545 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.545 00:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:51.545 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:51.545 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3115630 00:09:51.545 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:51.545 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:51.804 nvmf hotplug test: fio failed as expected 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
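For readers reconstructing the hotplug sequence logged above: fio reads run against the exported namespaces while rpc.py deletes the backing bdevs, so every job is expected to die with "Operation not supported" and the harness prints "nvmf hotplug test: fio failed as expected". The lines below are a minimal stand-alone sketch of that flow, not the actual fio.sh test script; the bdev name Malloc0, the subsystem NQN and the /dev/nvme0n1 device node are taken from this run, it assumes a target already configured as earlier in the log, and the plain fio flags are only an approximation of what scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 generates.

#!/usr/bin/env bash
# Hedged sketch of the hotplug check: start a long-running fio read against
# the NVMe-oF namespace, then pull the backing bdev out from under it and
# confirm fio exits with an error, as the log above shows.
RPC=./scripts/rpc.py               # assumed path inside an SPDK checkout
NQN=nqn.2016-06.io.spdk:cnode1

# 10 s of time-based 4 KiB reads at queue depth 1, mirroring the dumped job file.
fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
    --ioengine=libaio --direct=1 --time_based --runtime=10 &
FIO_PID=$!

sleep 3                            # let I/O get going before the hotplug event
$RPC bdev_malloc_delete Malloc0    # backing bdev disappears mid-I/O

wait $FIO_PID && echo "unexpected: fio succeeded" || echo "fio failed as expected"

# Tear-down, in the order the log uses: disconnect the initiator, then drop the subsystem.
nvme disconnect -n $NQN
$RPC nvmf_delete_subsystem $NQN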
00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.804 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.065 rmmod nvme_tcp 00:09:52.065 rmmod nvme_fabrics 00:09:52.065 rmmod nvme_keyring 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3111890 ']' 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3111890 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3111890 ']' 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3111890 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3111890 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3111890' 00:09:52.065 killing process with pid 3111890 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3111890 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3111890 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 
00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:52.065 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:09:52.337 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.337 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.337 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.337 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.337 00:17:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.253 00:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.253 00:09:54.253 real 0m29.273s 00:09:54.253 user 2m34.120s 00:09:54.253 sys 0m9.275s 00:09:54.253 00:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.253 00:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.253 ************************************ 00:09:54.253 END TEST nvmf_fio_target 00:09:54.253 ************************************ 00:09:54.253 00:17:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:54.253 00:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:54.253 00:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.253 00:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.253 ************************************ 00:09:54.253 START TEST nvmf_bdevio 00:09:54.253 ************************************ 00:09:54.253 00:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:54.514 * Looking for test storage... 
00:09:54.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.515 00:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:54.515 00:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:09:54.515 00:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:54.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.515 --rc genhtml_branch_coverage=1 00:09:54.515 --rc genhtml_function_coverage=1 00:09:54.515 --rc genhtml_legend=1 00:09:54.515 --rc geninfo_all_blocks=1 00:09:54.515 --rc geninfo_unexecuted_blocks=1 00:09:54.515 00:09:54.515 ' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:54.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.515 --rc genhtml_branch_coverage=1 00:09:54.515 --rc genhtml_function_coverage=1 00:09:54.515 --rc genhtml_legend=1 00:09:54.515 --rc geninfo_all_blocks=1 00:09:54.515 --rc geninfo_unexecuted_blocks=1 00:09:54.515 00:09:54.515 ' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:54.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.515 --rc genhtml_branch_coverage=1 00:09:54.515 --rc genhtml_function_coverage=1 00:09:54.515 --rc genhtml_legend=1 00:09:54.515 --rc geninfo_all_blocks=1 00:09:54.515 --rc geninfo_unexecuted_blocks=1 00:09:54.515 00:09:54.515 ' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:54.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.515 --rc genhtml_branch_coverage=1 00:09:54.515 --rc genhtml_function_coverage=1 00:09:54.515 --rc genhtml_legend=1 00:09:54.515 --rc geninfo_all_blocks=1 00:09:54.515 --rc geninfo_unexecuted_blocks=1 00:09:54.515 00:09:54.515 ' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:54.515 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:54.516 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:54.516 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.516 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.516 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.516 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:54.516 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:54.516 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.516 00:17:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:02.684 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.684 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:02.685 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:02.685 00:17:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:02.685 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:02.685 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.685 
00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:10:02.685 00:10:02.685 --- 10.0.0.2 ping statistics --- 00:10:02.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.685 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:10:02.685 00:10:02.685 --- 10.0.0.1 ping statistics --- 00:10:02.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.685 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3120967 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3120967 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3120967 ']' 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.685 00:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.685 [2024-10-09 00:17:32.668605] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:10:02.685 [2024-10-09 00:17:32.668687] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.685 [2024-10-09 00:17:32.756813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.685 [2024-10-09 00:17:32.850490] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.685 [2024-10-09 00:17:32.850550] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.685 [2024-10-09 00:17:32.850561] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.685 [2024-10-09 00:17:32.850569] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.685 [2024-10-09 00:17:32.850576] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.685 [2024-10-09 00:17:32.853094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:02.685 [2024-10-09 00:17:32.853254] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:10:02.685 [2024-10-09 00:17:32.853415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:10:02.685 [2024-10-09 00:17:32.853416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.946 [2024-10-09 00:17:33.543850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.946 Malloc0 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:02.946 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.946 00:17:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.207 [2024-10-09 00:17:33.609270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:03.207 { 00:10:03.207 "params": { 00:10:03.207 "name": "Nvme$subsystem", 00:10:03.207 "trtype": "$TEST_TRANSPORT", 00:10:03.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.207 "adrfam": "ipv4", 00:10:03.207 "trsvcid": "$NVMF_PORT", 00:10:03.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.207 "hdgst": ${hdgst:-false}, 00:10:03.207 "ddgst": ${ddgst:-false} 00:10:03.207 }, 00:10:03.207 "method": "bdev_nvme_attach_controller" 00:10:03.207 } 00:10:03.207 EOF 00:10:03.207 )") 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:03.207 00:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:03.207 "params": { 00:10:03.207 "name": "Nvme1", 00:10:03.207 "trtype": "tcp", 00:10:03.207 "traddr": "10.0.0.2", 00:10:03.207 "adrfam": "ipv4", 00:10:03.207 "trsvcid": "4420", 00:10:03.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.207 "hdgst": false, 00:10:03.207 "ddgst": false 00:10:03.207 }, 00:10:03.207 "method": "bdev_nvme_attach_controller" 00:10:03.207 }' 00:10:03.207 [2024-10-09 00:17:33.667223] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:10:03.207 [2024-10-09 00:17:33.667294] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121320 ] 00:10:03.207 [2024-10-09 00:17:33.750308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:03.467 [2024-10-09 00:17:33.849195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.467 [2024-10-09 00:17:33.849358] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.467 [2024-10-09 00:17:33.849358] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.727 I/O targets: 00:10:03.727 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:03.727 00:10:03.727 00:10:03.727 CUnit - A unit testing framework for C - Version 2.1-3 00:10:03.727 http://cunit.sourceforge.net/ 00:10:03.727 00:10:03.727 00:10:03.727 Suite: bdevio tests on: Nvme1n1 00:10:03.727 Test: blockdev write read block ...passed 00:10:03.727 Test: blockdev write zeroes read block ...passed 00:10:03.727 Test: blockdev write zeroes read no split ...passed 00:10:03.727 Test: blockdev write zeroes read split ...passed 00:10:03.727 Test: blockdev write zeroes read split partial ...passed 00:10:03.727 Test: blockdev reset ...[2024-10-09 00:17:34.235887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:03.727 [2024-10-09 00:17:34.235992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8f0d0 (9): Bad file descriptor 00:10:03.727 [2024-10-09 00:17:34.289870] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:03.727 passed 00:10:03.727 Test: blockdev write read 8 blocks ...passed 00:10:03.727 Test: blockdev write read size > 128k ...passed 00:10:03.727 Test: blockdev write read invalid size ...passed 00:10:03.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:03.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:03.727 Test: blockdev write read max offset ...passed 00:10:03.986 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:03.986 Test: blockdev writev readv 8 blocks ...passed 00:10:03.986 Test: blockdev writev readv 30 x 1block ...passed 00:10:03.986 Test: blockdev writev readv block ...passed 00:10:03.986 Test: blockdev writev readv size > 128k ...passed 00:10:03.986 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:03.986 Test: blockdev comparev and writev ...[2024-10-09 00:17:34.475121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.986 [2024-10-09 00:17:34.475170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.475187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.986 [2024-10-09 00:17:34.475196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.475698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.986 [2024-10-09 00:17:34.475712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.475730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.986 [2024-10-09 00:17:34.475738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.476187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.986 [2024-10-09 00:17:34.476199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.476213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.986 [2024-10-09 00:17:34.476220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.476715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.986 [2024-10-09 00:17:34.476732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.476746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.986 [2024-10-09 00:17:34.476754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:03.986 passed 00:10:03.986 Test: blockdev nvme passthru rw ...passed 00:10:03.986 Test: blockdev nvme passthru vendor specific ...[2024-10-09 00:17:34.561625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.986 [2024-10-09 00:17:34.561646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.562010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.986 [2024-10-09 00:17:34.562023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.562307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.986 [2024-10-09 00:17:34.562317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:03.986 [2024-10-09 00:17:34.562713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.986 [2024-10-09 00:17:34.562730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:03.986 passed 00:10:03.986 Test: blockdev nvme admin passthru ...passed 00:10:03.986 Test: blockdev copy ...passed 00:10:03.986 00:10:03.986 Run Summary: Type Total Ran Passed Failed Inactive 00:10:03.986 suites 1 1 n/a 0 0 00:10:03.986 tests 23 23 23 0 0 00:10:03.987 asserts 152 152 152 0 n/a 00:10:03.987 00:10:03.987 Elapsed time = 1.043 seconds 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.248 rmmod nvme_tcp 00:10:04.248 rmmod nvme_fabrics 00:10:04.248 rmmod nvme_keyring 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3120967 ']' 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3120967 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3120967 ']' 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3120967 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.248 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3120967 00:10:04.509 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:04.509 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:04.509 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3120967' 00:10:04.509 killing process with pid 3120967 00:10:04.509 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3120967 00:10:04.509 00:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3120967 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.509 00:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.071 00:10:07.071 real 0m12.354s 00:10:07.071 user 0m13.387s 00:10:07.071 sys 0m6.330s 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 ************************************ 00:10:07.071 END TEST nvmf_bdevio 00:10:07.071 ************************************ 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:07.071 00:10:07.071 real 5m5.204s 00:10:07.071 user 11m47.928s 00:10:07.071 sys 1m51.558s 
00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 ************************************ 00:10:07.071 END TEST nvmf_target_core 00:10:07.071 ************************************ 00:10:07.071 00:17:37 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:07.071 00:17:37 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.071 00:17:37 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.071 00:17:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 ************************************ 00:10:07.071 START TEST nvmf_target_extra 00:10:07.071 ************************************ 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:07.071 * Looking for test storage... 00:10:07.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:07.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.071 --rc genhtml_branch_coverage=1 00:10:07.071 --rc genhtml_function_coverage=1 00:10:07.071 --rc genhtml_legend=1 00:10:07.071 --rc geninfo_all_blocks=1 00:10:07.071 --rc geninfo_unexecuted_blocks=1 00:10:07.071 00:10:07.071 ' 00:10:07.071 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:07.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.072 --rc genhtml_branch_coverage=1 00:10:07.072 --rc genhtml_function_coverage=1 00:10:07.072 --rc genhtml_legend=1 00:10:07.072 --rc geninfo_all_blocks=1 00:10:07.072 --rc geninfo_unexecuted_blocks=1 00:10:07.072 00:10:07.072 ' 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:07.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.072 --rc genhtml_branch_coverage=1 00:10:07.072 --rc genhtml_function_coverage=1 00:10:07.072 --rc genhtml_legend=1 00:10:07.072 --rc geninfo_all_blocks=1 00:10:07.072 --rc geninfo_unexecuted_blocks=1 00:10:07.072 00:10:07.072 ' 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:07.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.072 --rc genhtml_branch_coverage=1 00:10:07.072 --rc genhtml_function_coverage=1 00:10:07.072 --rc genhtml_legend=1 00:10:07.072 --rc geninfo_all_blocks=1 00:10:07.072 --rc geninfo_unexecuted_blocks=1 00:10:07.072 00:10:07.072 ' 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:07.072 ************************************ 00:10:07.072 START TEST nvmf_example 00:10:07.072 ************************************ 00:10:07.072 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:07.340 * Looking for test storage... 
00:10:07.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.341 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.342 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:07.342 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.342 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:07.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.342 --rc genhtml_branch_coverage=1 00:10:07.342 --rc genhtml_function_coverage=1 00:10:07.342 --rc genhtml_legend=1 00:10:07.342 --rc geninfo_all_blocks=1 00:10:07.342 --rc geninfo_unexecuted_blocks=1 00:10:07.342 00:10:07.342 ' 00:10:07.342 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:07.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.342 --rc genhtml_branch_coverage=1 00:10:07.342 --rc genhtml_function_coverage=1 00:10:07.342 --rc genhtml_legend=1 00:10:07.342 --rc geninfo_all_blocks=1 00:10:07.342 --rc geninfo_unexecuted_blocks=1 00:10:07.342 00:10:07.342 ' 00:10:07.342 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:07.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.342 --rc genhtml_branch_coverage=1 00:10:07.342 --rc genhtml_function_coverage=1 00:10:07.342 --rc genhtml_legend=1 00:10:07.342 --rc geninfo_all_blocks=1 00:10:07.342 --rc geninfo_unexecuted_blocks=1 00:10:07.342 00:10:07.342 ' 00:10:07.342 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:07.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.342 --rc genhtml_branch_coverage=1 00:10:07.342 --rc genhtml_function_coverage=1 00:10:07.342 --rc genhtml_legend=1 00:10:07.342 --rc geninfo_all_blocks=1 00:10:07.342 --rc geninfo_unexecuted_blocks=1 00:10:07.342 00:10:07.342 ' 00:10:07.342 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:07.343 00:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.343 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:07.344 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:07.347 00:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:07.347 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:07.347 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:07.347 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:07.347 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:07.347 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:07.347 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.348 00:17:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:15.496 00:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:15.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:15.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:15.496 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:15.496 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.496 00:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:15.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:10:15.496 00:10:15.496 --- 10.0.0.2 ping statistics --- 00:10:15.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.496 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:10:15.496 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:10:15.497 00:10:15.497 --- 10.0.0.1 ping statistics --- 00:10:15.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.497 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3125830 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3125830 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3125830 ']' 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.497 00:17:45 
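[editor's note] For readability, the nvmf_tcp_init sequence traced above reduces to roughly the commands below. This is a condensed sketch of the trace, not the full nvmf/common.sh logic; the interface names (cvl_0_0, cvl_0_1), namespace name and addresses are the ones detected/reported in this run.

# target-side E810 port moves into its own namespace; initiator port stays on the host
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP (inside namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface, tagged so cleanup can find the rule
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# connectivity check in both directions, then load the kernel initiator
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp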
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.497 00:17:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.755 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:15.756 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.756 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.014 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:16.015 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.015 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.015 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:16.015 00:17:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:26.000 Initializing NVMe Controllers 00:10:26.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:26.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:26.000 Initialization complete. Launching workers. 00:10:26.000 ======================================================== 00:10:26.000 Latency(us) 00:10:26.000 Device Information : IOPS MiB/s Average min max 00:10:26.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18361.39 71.72 3485.41 636.39 16318.67 00:10:26.000 ======================================================== 00:10:26.000 Total : 18361.39 71.72 3485.41 636.39 16318.67 00:10:26.000 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.259 rmmod nvme_tcp 00:10:26.259 rmmod nvme_fabrics 00:10:26.259 rmmod nvme_keyring 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 3125830 ']' 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 3125830 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3125830 ']' 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3125830 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3125830 00:10:26.259 00:17:56 
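[editor's note] For readability, the nvmf_example flow traced above reduces to roughly the commands below. This is a condensed sketch of the trace, not the full target/nvmf_example.sh logic; paths are abbreviated relative to the spdk checkout, rpc_cmd is the autotest RPC helper, and the NQN, addresses and perf arguments are the ones shown in the trace.

# start the example NVMe-oF target inside the target namespace (shm id 0, group 10000, cores 0-3)
ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
# configure it over RPC: TCP transport, a malloc bdev, one subsystem with that namespace and a listener
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512                                     # creates Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# drive it from the host side for 10 s: 4 KiB random mixed I/O at queue depth 64 (results above)
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'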
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3125830' 00:10:26.259 killing process with pid 3125830 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3125830 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3125830 00:10:26.259 nvmf threads initialize successfully 00:10:26.259 bdev subsystem init successfully 00:10:26.259 created a nvmf target service 00:10:26.259 create targets's poll groups done 00:10:26.259 all subsystems of target started 00:10:26.259 nvmf target is running 00:10:26.259 all subsystems of target stopped 00:10:26.259 destroy targets's poll groups done 00:10:26.259 destroyed the nvmf target service 00:10:26.259 bdev subsystem finish successfully 00:10:26.259 nvmf threads destroy successfully 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.259 00:17:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.809 00:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.809 00:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:28.809 00:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.809 00:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.809 00:10:28.809 real 0m21.403s 00:10:28.809 user 0m46.319s 00:10:28.809 sys 0m7.186s 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.809 ************************************ 00:10:28.809 END TEST nvmf_example 00:10:28.809 ************************************ 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:28.809 ************************************ 00:10:28.809 START TEST nvmf_filesystem 00:10:28.809 ************************************ 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:28.809 * Looking for test storage... 00:10:28.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.809 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:28.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.809 --rc genhtml_branch_coverage=1 00:10:28.809 --rc genhtml_function_coverage=1 00:10:28.809 --rc genhtml_legend=1 00:10:28.809 --rc geninfo_all_blocks=1 00:10:28.810 --rc geninfo_unexecuted_blocks=1 00:10:28.810 00:10:28.810 ' 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:28.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.810 --rc genhtml_branch_coverage=1 00:10:28.810 --rc genhtml_function_coverage=1 00:10:28.810 --rc genhtml_legend=1 00:10:28.810 --rc geninfo_all_blocks=1 00:10:28.810 --rc geninfo_unexecuted_blocks=1 00:10:28.810 00:10:28.810 ' 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:28.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.810 --rc genhtml_branch_coverage=1 00:10:28.810 --rc genhtml_function_coverage=1 00:10:28.810 --rc genhtml_legend=1 00:10:28.810 --rc geninfo_all_blocks=1 00:10:28.810 --rc geninfo_unexecuted_blocks=1 00:10:28.810 00:10:28.810 ' 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:28.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.810 --rc genhtml_branch_coverage=1 00:10:28.810 --rc genhtml_function_coverage=1 00:10:28.810 --rc genhtml_legend=1 00:10:28.810 --rc geninfo_all_blocks=1 00:10:28.810 --rc geninfo_unexecuted_blocks=1 00:10:28.810 00:10:28.810 ' 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:28.810 00:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:28.810 00:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:28.810 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:28.811 #define SPDK_CONFIG_H 00:10:28.811 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:28.811 #define SPDK_CONFIG_APPS 1 00:10:28.811 #define SPDK_CONFIG_ARCH native 00:10:28.811 #undef SPDK_CONFIG_ASAN 00:10:28.811 #undef SPDK_CONFIG_AVAHI 00:10:28.811 #undef SPDK_CONFIG_CET 00:10:28.811 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:28.811 #define SPDK_CONFIG_COVERAGE 1 00:10:28.811 #define SPDK_CONFIG_CROSS_PREFIX 00:10:28.811 #undef SPDK_CONFIG_CRYPTO 00:10:28.811 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:28.811 #undef SPDK_CONFIG_CUSTOMOCF 00:10:28.811 #undef SPDK_CONFIG_DAOS 00:10:28.811 #define SPDK_CONFIG_DAOS_DIR 00:10:28.811 #define SPDK_CONFIG_DEBUG 1 00:10:28.811 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:28.811 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:28.811 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:28.811 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:28.811 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:28.811 #undef SPDK_CONFIG_DPDK_UADK 00:10:28.811 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:28.811 #define SPDK_CONFIG_EXAMPLES 1 00:10:28.811 #undef SPDK_CONFIG_FC 00:10:28.811 #define SPDK_CONFIG_FC_PATH 00:10:28.811 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:28.811 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:28.811 #define SPDK_CONFIG_FSDEV 1 00:10:28.811 #undef SPDK_CONFIG_FUSE 00:10:28.811 #undef SPDK_CONFIG_FUZZER 00:10:28.811 #define SPDK_CONFIG_FUZZER_LIB 00:10:28.811 #undef SPDK_CONFIG_GOLANG 00:10:28.811 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:28.811 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:28.811 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:28.811 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:28.811 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:28.811 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:28.811 #undef SPDK_CONFIG_HAVE_LZ4 00:10:28.811 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:28.811 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:28.811 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:28.811 #define SPDK_CONFIG_IDXD 1 00:10:28.811 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:28.811 #undef SPDK_CONFIG_IPSEC_MB 00:10:28.811 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:28.811 #define SPDK_CONFIG_ISAL 1 00:10:28.811 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:28.811 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:28.811 #define SPDK_CONFIG_LIBDIR 00:10:28.811 #undef SPDK_CONFIG_LTO 00:10:28.811 #define SPDK_CONFIG_MAX_LCORES 128 00:10:28.811 #define SPDK_CONFIG_NVME_CUSE 1 00:10:28.811 #undef SPDK_CONFIG_OCF 00:10:28.811 #define SPDK_CONFIG_OCF_PATH 00:10:28.811 #define SPDK_CONFIG_OPENSSL_PATH 00:10:28.811 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:28.811 #define SPDK_CONFIG_PGO_DIR 00:10:28.811 #undef SPDK_CONFIG_PGO_USE 00:10:28.811 #define SPDK_CONFIG_PREFIX /usr/local 00:10:28.811 #undef SPDK_CONFIG_RAID5F 00:10:28.811 #undef SPDK_CONFIG_RBD 00:10:28.811 #define SPDK_CONFIG_RDMA 1 00:10:28.811 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:28.811 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:28.811 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:28.811 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:28.811 #define SPDK_CONFIG_SHARED 1 00:10:28.811 #undef SPDK_CONFIG_SMA 00:10:28.811 #define SPDK_CONFIG_TESTS 1 00:10:28.811 #undef SPDK_CONFIG_TSAN 00:10:28.811 #define SPDK_CONFIG_UBLK 1 00:10:28.811 #define SPDK_CONFIG_UBSAN 1 00:10:28.811 #undef SPDK_CONFIG_UNIT_TESTS 00:10:28.811 #undef SPDK_CONFIG_URING 00:10:28.811 #define 
SPDK_CONFIG_URING_PATH 00:10:28.811 #undef SPDK_CONFIG_URING_ZNS 00:10:28.811 #undef SPDK_CONFIG_USDT 00:10:28.811 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:28.811 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:28.811 #define SPDK_CONFIG_VFIO_USER 1 00:10:28.811 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:28.811 #define SPDK_CONFIG_VHOST 1 00:10:28.811 #define SPDK_CONFIG_VIRTIO 1 00:10:28.811 #undef SPDK_CONFIG_VTUNE 00:10:28.811 #define SPDK_CONFIG_VTUNE_DIR 00:10:28.811 #define SPDK_CONFIG_WERROR 1 00:10:28.811 #define SPDK_CONFIG_WPDK_DIR 00:10:28.811 #undef SPDK_CONFIG_XNVME 00:10:28.811 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.811 00:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:28.811 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:28.812 
00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:28.812 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:28.813 00:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:28.813 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3128605 ]] 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3128605 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 
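[editor's note] The long run of "-- # : 0" / "-- # export SPDK_TEST_*" pairs traced above is autotest_common.sh giving every test flag a default and then exporting it, so that child scripts all see the same configuration. A minimal sketch of that pattern, with only two illustrative flag names rather than the full list:

    # Give each flag a default only if the caller did not already set it,
    # then export it so every child script and test binary sees the value.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT

    # Later scripts simply branch on the exported flags:
    if [ "$SPDK_TEST_NVMF" -eq 1 ]; then
        echo "NVMe-oF tests enabled, transport: $SPDK_TEST_NVMF_TRANSPORT"
    fi

In this run the flags come from the autorun-spdk.conf written earlier in the job, which is why SPDK_TEST_NVMF, SPDK_TEST_VFIOUSER and SPDK_TEST_NVMF_TRANSPORT=tcp show up as 1/tcp in the trace.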
00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.hpH1MQ 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hpH1MQ/tests/target /tmp/spdk.hpH1MQ 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:28.814 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=607141888 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:29.077 00:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677287936 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123083276288 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356558336 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6273282048 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668246016 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678277120 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847959552 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871314944 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23355392 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:29.077 00:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64678010880 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=270336 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935643136 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935655424 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:29.077 * Looking for test storage... 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123083276288 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8487874560 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.077 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:29.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.078 --rc genhtml_branch_coverage=1 00:10:29.078 --rc genhtml_function_coverage=1 00:10:29.078 --rc genhtml_legend=1 00:10:29.078 --rc geninfo_all_blocks=1 00:10:29.078 --rc geninfo_unexecuted_blocks=1 00:10:29.078 00:10:29.078 ' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:29.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.078 --rc genhtml_branch_coverage=1 00:10:29.078 --rc genhtml_function_coverage=1 00:10:29.078 --rc genhtml_legend=1 00:10:29.078 --rc geninfo_all_blocks=1 00:10:29.078 --rc geninfo_unexecuted_blocks=1 00:10:29.078 00:10:29.078 ' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:29.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.078 --rc genhtml_branch_coverage=1 00:10:29.078 --rc genhtml_function_coverage=1 00:10:29.078 --rc genhtml_legend=1 00:10:29.078 --rc geninfo_all_blocks=1 00:10:29.078 --rc geninfo_unexecuted_blocks=1 00:10:29.078 00:10:29.078 ' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:29.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.078 --rc genhtml_branch_coverage=1 00:10:29.078 --rc genhtml_function_coverage=1 00:10:29.078 --rc genhtml_legend=1 00:10:29.078 --rc geninfo_all_blocks=1 00:10:29.078 --rc geninfo_unexecuted_blocks=1 00:10:29.078 00:10:29.078 ' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
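[editor's note] The lt 1.15 2 / cmp_versions trace above decides whether the installed lcov (1.15 here) predates lcov 2, which in turn selects the old "--rc lcov_branch_coverage=1" style options. A stand-alone sketch of that dotted-version comparison, simplified relative to the real scripts/common.sh (which handles more separators and operators):

    # Succeed (return 0) when dotted version $1 is strictly lower than $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( 10#$x < 10#$y )) && return 0   # force base 10, avoids octal surprises
            (( 10#$x > 10#$y )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"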
-- nvmf/common.sh@7 -- # uname -s 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.078 00:17:59 
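[editor's note] The "[: : integer expression expected" message recorded above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test's -eq needs integers on both sides, so an empty or unset variable makes it print the complaint to stderr and return non-zero, and the script simply carries on. A small reproduction and the usual guard (the variable name here is purely illustrative):

    unset MY_FLAG                     # deliberately empty, stands in for the unset option
    [ "$MY_FLAG" -eq 1 ]              # prints "[: : integer expression expected", test fails
    echo "script keeps going, the test just failed"

    # Guarding with a default expansion (or an explicit -n check) avoids the noise:
    [ "${MY_FLAG:-0}" -eq 1 ] && echo "flag set"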
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:29.078 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:29.079 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.079 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.079 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.079 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:29.079 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:29.079 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:29.079 00:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:37.229 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:37.229 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.229 00:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.229 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:37.230 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:37.230 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:37.230 00:18:06 
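[editor's note] The device-discovery block above builds arrays of supported Intel E810/X722 and Mellanox PCI device IDs, then resolves each matching PCI address to its kernel network interface via /sys/bus/pci/devices/<bdf>/net, which is how it reports "Found net devices under 0000:4b:00.0: cvl_0_0". A minimal sysfs walk in the same spirit, with the ID list trimmed to the two E810 variants seen in this log:

    # Map supported NIC PCI IDs to their kernel net interface names via sysfs.
    supported_ids=("0x1592" "0x159b")        # E810 device IDs appearing in this trace
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
        [ "$vendor" = "0x8086" ] || continue
        for id in "${supported_ids[@]}"; do
            if [ "$device" = "$id" ]; then
                for net in "$dev"/net/*; do
                    [ -e "$net" ] && echo "Found ${dev##*/}: ${net##*/}"
                done
            fi
        done
    done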
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.230 00:18:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:10:37.230 00:10:37.230 --- 10.0.0.2 ping statistics --- 00:10:37.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.230 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:10:37.230 00:10:37.230 --- 10.0.0.1 ping statistics --- 00:10:37.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.230 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.230 ************************************ 00:10:37.230 START TEST nvmf_filesystem_no_in_capsule 00:10:37.230 ************************************ 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3132569 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3132569 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3132569 ']' 00:10:37.230 
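The nvmf_tcp_init sequence logged just above puts the target-side port into its own network namespace, addresses both ends of the link, opens TCP port 4420 in the firewall, and ping-checks connectivity in both directions. A condensed sketch using the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses from this run:

ip netns add cvl_0_0_ns_spdk                                    # target runs in a private namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator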
00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.230 00:18:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.230 [2024-10-09 00:18:07.195124] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:10:37.230 [2024-10-09 00:18:07.195187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.230 [2024-10-09 00:18:07.283887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.230 [2024-10-09 00:18:07.378694] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.230 [2024-10-09 00:18:07.378766] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.230 [2024-10-09 00:18:07.378776] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.230 [2024-10-09 00:18:07.378783] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.230 [2024-10-09 00:18:07.378789] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
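nvmfappstart, as recorded here, launches nvmf_tgt inside the target namespace and then waits for its RPC socket before configuring anything. A hedged sketch of the equivalent steps (the polling loop is illustrative; the harness's waitforlisten helper performs the same wait):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# block until the target is up and listening on its UNIX-domain RPC socket
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done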
00:10:37.230 [2024-10-09 00:18:07.381137] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.230 [2024-10-09 00:18:07.381299] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.230 [2024-10-09 00:18:07.381462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.230 [2024-10-09 00:18:07.381462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.490 [2024-10-09 00:18:08.068942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.490 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.756 Malloc1 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.756 00:18:08 
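With the reactors running, the target is configured over JSON-RPC. The transport, backing malloc bdev and subsystem created above, together with the namespace and listener added just below, amount to roughly the following calls (assuming rpc_cmd is the harness wrapper around spdk/scripts/rpc.py pointed at the target's RPC socket):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0                                  # -c 0: no in-capsule data in this pass
rpc.py bdev_malloc_create 512 512 -b Malloc1                                         # 512 MiB RAM-backed bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host, set the serial
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                      # expose the bdev as a namespace
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420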
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.756 [2024-10-09 00:18:08.226553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:37.756 { 00:10:37.756 "name": "Malloc1", 00:10:37.756 "aliases": [ 00:10:37.756 "183b8b36-5480-4cae-879e-0c1cc2cbad87" 00:10:37.756 ], 00:10:37.756 "product_name": "Malloc disk", 00:10:37.756 "block_size": 512, 00:10:37.756 "num_blocks": 1048576, 00:10:37.756 "uuid": "183b8b36-5480-4cae-879e-0c1cc2cbad87", 00:10:37.756 "assigned_rate_limits": { 00:10:37.756 "rw_ios_per_sec": 0, 00:10:37.756 "rw_mbytes_per_sec": 0, 00:10:37.756 "r_mbytes_per_sec": 0, 00:10:37.756 "w_mbytes_per_sec": 0 00:10:37.756 }, 00:10:37.756 "claimed": true, 00:10:37.756 "claim_type": "exclusive_write", 00:10:37.756 "zoned": false, 00:10:37.756 "supported_io_types": { 00:10:37.756 "read": 
true, 00:10:37.756 "write": true, 00:10:37.756 "unmap": true, 00:10:37.756 "flush": true, 00:10:37.756 "reset": true, 00:10:37.756 "nvme_admin": false, 00:10:37.756 "nvme_io": false, 00:10:37.756 "nvme_io_md": false, 00:10:37.756 "write_zeroes": true, 00:10:37.756 "zcopy": true, 00:10:37.756 "get_zone_info": false, 00:10:37.756 "zone_management": false, 00:10:37.756 "zone_append": false, 00:10:37.756 "compare": false, 00:10:37.756 "compare_and_write": false, 00:10:37.756 "abort": true, 00:10:37.756 "seek_hole": false, 00:10:37.756 "seek_data": false, 00:10:37.756 "copy": true, 00:10:37.756 "nvme_iov_md": false 00:10:37.756 }, 00:10:37.756 "memory_domains": [ 00:10:37.756 { 00:10:37.756 "dma_device_id": "system", 00:10:37.756 "dma_device_type": 1 00:10:37.756 }, 00:10:37.756 { 00:10:37.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.756 "dma_device_type": 2 00:10:37.756 } 00:10:37.756 ], 00:10:37.756 "driver_specific": {} 00:10:37.756 } 00:10:37.756 ]' 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:37.756 00:18:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:39.220 00:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.220 00:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:39.220 00:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.220 00:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:39.220 00:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:41.754 00:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:41.754 00:18:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:42.013 00:18:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:43.397 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:43.397 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:43.397 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:43.397 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.398 ************************************ 00:10:43.398 START TEST filesystem_ext4 00:10:43.398 ************************************ 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
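On the initiator side, the log above shows the kernel host connecting to that subsystem, waiting for the namespace to appear as a block device, and laying down a single GPT partition for the filesystem tests. A condensed sketch (the hostnqn/hostid values are the ones printed in this run):

nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
             --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# poll until a block device carrying the subsystem serial shows up
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 in this run
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1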
00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:43.398 00:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:43.398 mke2fs 1.47.0 (5-Feb-2023) 00:10:43.398 Discarding device blocks: 0/522240 done 00:10:43.398 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:43.398 Filesystem UUID: 540ae064-43e5-4b7f-ad69-062da6628a03 00:10:43.398 Superblock backups stored on blocks: 00:10:43.398 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:43.398 00:10:43.398 Allocating group tables: 0/64 done 00:10:43.398 Writing inode tables: 0/64 done 00:10:45.936 Creating journal (8192 blocks): done 00:10:45.936 Writing superblocks and filesystem accounting information: 0/64 done 00:10:45.936 00:10:45.936 00:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:45.936 00:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.527 
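Each filesystem sub-test recorded here (ext4 above, btrfs and xfs below) exercises the same short sequence on the new partition, and the checks that follow it only confirm the target process is still alive and the device is still visible. Roughly:

mkfs.ext4 -F /dev/nvme0n1p1                 # make_filesystem picks -F for ext4 and -f for btrfs/xfs
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                       # create/sync/delete round-trip through the NVMe-oF device
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                          # target process still running?
lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still present?
lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present?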
00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3132569 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.527 00:10:52.527 real 0m8.863s 00:10:52.527 user 0m0.040s 00:10:52.527 sys 0m0.043s 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:52.527 ************************************ 00:10:52.527 END TEST filesystem_ext4 00:10:52.527 ************************************ 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.527 ************************************ 00:10:52.527 START TEST filesystem_btrfs 00:10:52.527 ************************************ 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:52.527 00:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:52.527 btrfs-progs v6.8.1 00:10:52.527 See https://btrfs.readthedocs.io for more information. 00:10:52.527 00:10:52.527 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:52.527 NOTE: several default settings have changed in version 5.15, please make sure 00:10:52.527 this does not affect your deployments: 00:10:52.527 - DUP for metadata (-m dup) 00:10:52.527 - enabled no-holes (-O no-holes) 00:10:52.527 - enabled free-space-tree (-R free-space-tree) 00:10:52.527 00:10:52.527 Label: (null) 00:10:52.527 UUID: bccec90c-dc9c-4140-b320-786e55b6ecc4 00:10:52.527 Node size: 16384 00:10:52.527 Sector size: 4096 (CPU page size: 4096) 00:10:52.527 Filesystem size: 510.00MiB 00:10:52.527 Block group profiles: 00:10:52.527 Data: single 8.00MiB 00:10:52.527 Metadata: DUP 32.00MiB 00:10:52.527 System: DUP 8.00MiB 00:10:52.527 SSD detected: yes 00:10:52.527 Zoned device: no 00:10:52.527 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:52.527 Checksum: crc32c 00:10:52.527 Number of devices: 1 00:10:52.527 Devices: 00:10:52.527 ID SIZE PATH 00:10:52.527 1 510.00MiB /dev/nvme0n1p1 00:10:52.527 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:52.527 00:18:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3132569 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.788 
00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.788 00:10:52.788 real 0m0.664s 00:10:52.788 user 0m0.022s 00:10:52.788 sys 0m0.067s 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:52.788 ************************************ 00:10:52.788 END TEST filesystem_btrfs 00:10:52.788 ************************************ 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.788 ************************************ 00:10:52.788 START TEST filesystem_xfs 00:10:52.788 ************************************ 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:52.788 00:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:53.048 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:53.048 = sectsz=512 attr=2, projid32bit=1 00:10:53.048 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:53.048 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:53.048 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:53.048 = sunit=0 swidth=0 blks 00:10:53.048 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:53.048 log =internal log bsize=4096 blocks=16384, version=2 00:10:53.048 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:53.048 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:53.986 Discarding blocks...Done. 00:10:53.986 00:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:53.986 00:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3132569 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.527 00:10:56.527 real 0m3.591s 00:10:56.527 user 0m0.027s 00:10:56.527 sys 0m0.056s 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.527 ************************************ 00:10:56.527 END TEST filesystem_xfs 00:10:56.527 ************************************ 00:10:56.527 00:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:56.527 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:56.527 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.787 00:18:27 
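After the three filesystem passes, the host side is torn down: the test partition is removed, the initiator disconnects, and (in the lines that follow) the subsystem is deleted and the target process is stopped. A sketch, with the wait loop shown only as an illustration of what waitforserial_disconnect checks:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition under a lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
until ! lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"                                       # killprocess 3132569 in this run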
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3132569 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3132569 ']' 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3132569 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3132569 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3132569' 00:10:56.787 killing process with pid 3132569 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3132569 00:10:56.787 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 3132569 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:57.048 00:10:57.048 real 0m20.369s 00:10:57.048 user 1m20.290s 00:10:57.048 sys 0m1.377s 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 ************************************ 00:10:57.048 END TEST nvmf_filesystem_no_in_capsule 00:10:57.048 ************************************ 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 ************************************ 00:10:57.048 START TEST nvmf_filesystem_in_capsule 00:10:57.048 ************************************ 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3137254 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3137254 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3137254 ']' 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.048 00:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 [2024-10-09 00:18:27.643508] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:10:57.048 [2024-10-09 00:18:27.643560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.308 [2024-10-09 00:18:27.728527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.308 [2024-10-09 00:18:27.797688] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.308 [2024-10-09 00:18:27.797734] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.308 [2024-10-09 00:18:27.797741] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.308 [2024-10-09 00:18:27.797746] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.308 [2024-10-09 00:18:27.797751] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.308 [2024-10-09 00:18:27.799426] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.308 [2024-10-09 00:18:27.799579] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.308 [2024-10-09 00:18:27.799751] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.308 [2024-10-09 00:18:27.799787] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.877 [2024-10-09 00:18:28.493988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.877 00:18:28 
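The second pass, nvmf_filesystem_in_capsule, repeats the whole procedure against a fresh target (pid 3137254 here); the functional difference recorded above is that the TCP transport is created with a 4096-byte in-capsule data size instead of zero:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 4096: allow command data in-capsule (first pass used -c 0)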
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.877 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.136 Malloc1 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.136 [2024-10-09 00:18:28.613987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:58.136 00:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.136 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:58.137 { 00:10:58.137 "name": "Malloc1", 00:10:58.137 "aliases": [ 00:10:58.137 "d80a7f58-181e-4b42-aed6-0fd99646f2e0" 00:10:58.137 ], 00:10:58.137 "product_name": "Malloc disk", 00:10:58.137 "block_size": 512, 00:10:58.137 "num_blocks": 1048576, 00:10:58.137 "uuid": "d80a7f58-181e-4b42-aed6-0fd99646f2e0", 00:10:58.137 "assigned_rate_limits": { 00:10:58.137 "rw_ios_per_sec": 0, 00:10:58.137 "rw_mbytes_per_sec": 0, 00:10:58.137 "r_mbytes_per_sec": 0, 00:10:58.137 "w_mbytes_per_sec": 0 00:10:58.137 }, 00:10:58.137 "claimed": true, 00:10:58.137 "claim_type": "exclusive_write", 00:10:58.137 "zoned": false, 00:10:58.137 "supported_io_types": { 00:10:58.137 "read": true, 00:10:58.137 "write": true, 00:10:58.137 "unmap": true, 00:10:58.137 "flush": true, 00:10:58.137 "reset": true, 00:10:58.137 "nvme_admin": false, 00:10:58.137 "nvme_io": false, 00:10:58.137 "nvme_io_md": false, 00:10:58.137 "write_zeroes": true, 00:10:58.137 "zcopy": true, 00:10:58.137 "get_zone_info": false, 00:10:58.137 "zone_management": false, 00:10:58.137 "zone_append": false, 00:10:58.137 "compare": false, 00:10:58.137 "compare_and_write": false, 00:10:58.137 "abort": true, 00:10:58.137 "seek_hole": false, 00:10:58.137 "seek_data": false, 00:10:58.137 "copy": true, 00:10:58.137 "nvme_iov_md": false 00:10:58.137 }, 00:10:58.137 "memory_domains": [ 00:10:58.137 { 00:10:58.137 "dma_device_id": "system", 00:10:58.137 "dma_device_type": 1 00:10:58.137 }, 00:10:58.137 { 00:10:58.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.137 "dma_device_type": 2 00:10:58.137 } 00:10:58.137 ], 00:10:58.137 "driver_specific": {} 00:10:58.137 } 00:10:58.137 ]' 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:58.137 00:18:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.044 00:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.044 00:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:00.044 00:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.044 00:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:00.044 00:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:01.946 00:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:01.946 00:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:03.324 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:03.324 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:03.324 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.325 ************************************ 00:11:03.325 START TEST filesystem_in_capsule_ext4 00:11:03.325 ************************************ 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:03.325 00:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:03.325 mke2fs 1.47.0 (5-Feb-2023) 00:11:03.325 Discarding device blocks: 0/522240 done 00:11:03.325 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:03.325 Filesystem UUID: b3caf4af-5c06-4b8f-b383-7fed029ea22c 00:11:03.325 Superblock backups stored on blocks: 00:11:03.325 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:03.325 00:11:03.325 Allocating group tables: 0/64 done 00:11:03.325 Writing inode tables: 
0/64 done 00:11:03.325 Creating journal (8192 blocks): done 00:11:05.648 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:11:05.648 00:11:05.648 00:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:05.648 00:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3137254 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.228 00:11:12.228 real 0m8.668s 00:11:12.228 user 0m0.024s 00:11:12.228 sys 0m0.061s 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:12.228 ************************************ 00:11:12.228 END TEST filesystem_in_capsule_ext4 00:11:12.228 ************************************ 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.228 
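That completes the ext4 pass of the in-capsule filesystem test. The per-filesystem body it traced is essentially the following smoke test (a sketch; only the mkfs line differs for the btrfs and xfs passes that follow, and 3137254 is the nvmf_tgt pid in this run):

  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                      # do a little real I/O through the filesystem
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 3137254                            # the target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible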
************************************ 00:11:12.228 START TEST filesystem_in_capsule_btrfs 00:11:12.228 ************************************ 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:12.228 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:12.229 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:12.229 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:12.229 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:12.229 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:12.229 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:12.229 btrfs-progs v6.8.1 00:11:12.229 See https://btrfs.readthedocs.io for more information. 00:11:12.229 00:11:12.229 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:12.229 NOTE: several default settings have changed in version 5.15, please make sure 00:11:12.229 this does not affect your deployments: 00:11:12.229 - DUP for metadata (-m dup) 00:11:12.229 - enabled no-holes (-O no-holes) 00:11:12.229 - enabled free-space-tree (-R free-space-tree) 00:11:12.229 00:11:12.229 Label: (null) 00:11:12.229 UUID: 2141f70d-b677-45ac-aa55-b4d47105ec31 00:11:12.229 Node size: 16384 00:11:12.229 Sector size: 4096 (CPU page size: 4096) 00:11:12.229 Filesystem size: 510.00MiB 00:11:12.229 Block group profiles: 00:11:12.229 Data: single 8.00MiB 00:11:12.229 Metadata: DUP 32.00MiB 00:11:12.229 System: DUP 8.00MiB 00:11:12.229 SSD detected: yes 00:11:12.229 Zoned device: no 00:11:12.229 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:12.229 Checksum: crc32c 00:11:12.229 Number of devices: 1 00:11:12.229 Devices: 00:11:12.229 ID SIZE PATH 00:11:12.229 1 510.00MiB /dev/nvme0n1p1 00:11:12.229 00:11:12.229 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:12.229 00:18:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3137254 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.802 00:11:12.802 real 0m0.996s 00:11:12.802 user 0m0.024s 00:11:12.802 sys 0m0.068s 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:12.802 ************************************ 00:11:12.802 END TEST filesystem_in_capsule_btrfs 00:11:12.802 ************************************ 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.802 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.061 ************************************ 00:11:13.061 START TEST filesystem_in_capsule_xfs 00:11:13.061 ************************************ 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:13.061 00:18:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:13.061 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:13.061 = sectsz=512 attr=2, projid32bit=1 00:11:13.061 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:13.061 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:13.061 data = bsize=4096 blocks=130560, imaxpct=25 00:11:13.061 = sunit=0 swidth=0 blks 00:11:13.061 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:13.061 log =internal log bsize=4096 blocks=16384, version=2 00:11:13.061 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:13.061 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:13.999 Discarding blocks...Done. 
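The make_filesystem calls traced in these passes reduce to a small helper that just picks the right force flag per filesystem type before running mkfs. A rough reconstruction (a sketch only; the real autotest_common.sh helper also keeps the retry counter seen as 'local i=0' above, which is omitted here):

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local force

      if [ "$fstype" = ext4 ]; then
          force=-F            # mkfs.ext4 spells "force" with a capital F
      else
          force=-f            # btrfs and xfs use lowercase -f
      fi

      mkfs.$fstype $force "$dev_name"
  }

  # used as: make_filesystem xfs /dev/nvme0n1p1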
00:11:13.999 00:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:13.999 00:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3137254 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.533 00:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.533 00:11:16.533 real 0m3.556s 00:11:16.533 user 0m0.028s 00:11:16.533 sys 0m0.053s 00:11:16.533 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.533 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.533 ************************************ 00:11:16.533 END TEST filesystem_in_capsule_xfs 00:11:16.533 ************************************ 00:11:16.533 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:16.533 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:16.533 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.533 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.533 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:16.792 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:16.792 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.792 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3137254 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3137254 ']' 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3137254 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3137254 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3137254' 00:11:16.793 killing process with pid 3137254 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3137254 00:11:16.793 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3137254 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:17.066 00:11:17.066 real 0m19.912s 00:11:17.066 user 1m18.657s 00:11:17.066 sys 0m1.269s 00:11:17.066 00:18:47 
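The teardown traced here mirrors the setup in reverse. As a condensed sketch (pid 3137254 and the cnode1 NQN are from this run; rpc.py again stands in for rpc_cmd):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1       # drop the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # detach the initiator
  # wait for the namespace to disappear from lsblk before touching the target
  until ! lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 3137254                                          # stop nvmf_tgt, as killprocess does
  wait 3137254                                          # and wait for it to exit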
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.066 ************************************ 00:11:17.066 END TEST nvmf_filesystem_in_capsule 00:11:17.066 ************************************ 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.066 rmmod nvme_tcp 00:11:17.066 rmmod nvme_fabrics 00:11:17.066 rmmod nvme_keyring 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.066 00:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.612 00:11:19.612 real 0m50.580s 00:11:19.612 user 2m41.385s 00:11:19.612 sys 0m8.476s 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.612 
************************************ 00:11:19.612 END TEST nvmf_filesystem 00:11:19.612 ************************************ 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.612 ************************************ 00:11:19.612 START TEST nvmf_target_discovery 00:11:19.612 ************************************ 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:19.612 * Looking for test storage... 00:11:19.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:19.612 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:19.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.613 --rc genhtml_branch_coverage=1 00:11:19.613 --rc genhtml_function_coverage=1 00:11:19.613 --rc genhtml_legend=1 00:11:19.613 --rc geninfo_all_blocks=1 00:11:19.613 --rc geninfo_unexecuted_blocks=1 00:11:19.613 00:11:19.613 ' 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:19.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.613 --rc genhtml_branch_coverage=1 00:11:19.613 --rc genhtml_function_coverage=1 00:11:19.613 --rc genhtml_legend=1 00:11:19.613 --rc geninfo_all_blocks=1 00:11:19.613 --rc geninfo_unexecuted_blocks=1 00:11:19.613 00:11:19.613 ' 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:19.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.613 --rc genhtml_branch_coverage=1 00:11:19.613 --rc genhtml_function_coverage=1 00:11:19.613 --rc genhtml_legend=1 00:11:19.613 --rc geninfo_all_blocks=1 00:11:19.613 --rc geninfo_unexecuted_blocks=1 00:11:19.613 00:11:19.613 ' 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:19.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.613 --rc genhtml_branch_coverage=1 00:11:19.613 --rc genhtml_function_coverage=1 00:11:19.613 --rc genhtml_legend=1 00:11:19.613 --rc geninfo_all_blocks=1 00:11:19.613 --rc geninfo_unexecuted_blocks=1 00:11:19.613 00:11:19.613 ' 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.613 00:18:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.613 00:18:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.837 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.837 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.837 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.837 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.837 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.837 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.837 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.837 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.837 00:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.837 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:27.838 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:27.838 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:27.838 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:27.838 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.838 00:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.773 ms 00:11:27.838 00:11:27.838 --- 10.0.0.2 ping statistics --- 00:11:27.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.838 rtt min/avg/max/mdev = 0.773/0.773/0.773/0.000 ms 00:11:27.838 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:11:27.838 00:11:27.838 --- 10.0.0.1 ping statistics --- 00:11:27.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.839 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=3145538 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 3145538 00:11:27.839 00:18:57 
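nvmf_tcp_init above splits the two E810 ports into a point-to-point test topology: cvl_0_0 moves into its own network namespace as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. The equivalent manual setup looks roughly like this (device names, addresses and the namespace name are the ones used in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator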
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3145538 ']' 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.839 00:18:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.839 [2024-10-09 00:18:57.534987] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:11:27.839 [2024-10-09 00:18:57.535052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.839 [2024-10-09 00:18:57.616925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.839 [2024-10-09 00:18:57.722518] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.839 [2024-10-09 00:18:57.722581] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.839 [2024-10-09 00:18:57.722588] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.839 [2024-10-09 00:18:57.722593] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.839 [2024-10-09 00:18:57.722598] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
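The target launch above boils down to one command plus a wait for the RPC socket. A rough stand-in for nvmfappstart/waitforlisten is sketched below; the socket-polling loop is an approximation of what the helper does, not a copy of it.

  # shm id 0, all tracepoint groups (-e 0xFFFF), cores 0-3 (-m 0xF), run inside the target namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app is listening on /var/tmp/spdk.sock before issuing any RPCs
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done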
00:11:27.839 [2024-10-09 00:18:57.724528] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.839 [2024-10-09 00:18:57.724689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.839 [2024-10-09 00:18:57.724854] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.839 [2024-10-09 00:18:57.725004] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.839 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.839 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:27.839 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:27.839 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:27.839 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.839 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.839 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.839 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.839 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.839 [2024-10-09 00:18:58.471199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.100 Null1 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.100 00:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.100 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 [2024-10-09 00:18:58.531708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 Null2 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:28.101 Null3 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 Null4 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 
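discovery.sh builds its fixture with the RPCs visible above: one null bdev per subsystem, four NVMe subsystems with a single namespace each, a TCP listener on 10.0.0.2:4420 for every subsystem plus the discovery subsystem, and one referral to port 4430 (the remaining listener and referral calls follow just below). The same fixture can be created outside the harness with scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_null_create Null1 102400 512        # same size/block-size arguments as the run above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ... repeated for Null2/cnode2 through Null4/cnode4 ...
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # host NQN/ID as generated by nvme gen-hostnqn in nvmf/common.sh; the discover call
  # returns the six-entry discovery log shown further down
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420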
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.101 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:28.361 00:11:28.361 Discovery Log Number of Records 6, Generation counter 6 00:11:28.361 =====Discovery Log Entry 0====== 00:11:28.361 trtype: tcp 00:11:28.361 adrfam: ipv4 00:11:28.361 subtype: current discovery subsystem 00:11:28.361 treq: not required 00:11:28.361 portid: 0 00:11:28.361 trsvcid: 4420 00:11:28.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.362 traddr: 10.0.0.2 00:11:28.362 eflags: explicit discovery connections, duplicate discovery information 00:11:28.362 sectype: none 00:11:28.362 =====Discovery Log Entry 1====== 00:11:28.362 trtype: tcp 00:11:28.362 adrfam: ipv4 00:11:28.362 subtype: nvme subsystem 00:11:28.362 treq: not required 00:11:28.362 portid: 0 00:11:28.362 trsvcid: 4420 00:11:28.362 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:28.362 traddr: 10.0.0.2 00:11:28.362 eflags: none 00:11:28.362 sectype: none 00:11:28.362 =====Discovery Log Entry 2====== 00:11:28.362 trtype: tcp 00:11:28.362 adrfam: ipv4 00:11:28.362 subtype: nvme subsystem 00:11:28.362 treq: not required 00:11:28.362 portid: 0 00:11:28.362 trsvcid: 4420 00:11:28.362 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:28.362 traddr: 10.0.0.2 00:11:28.362 eflags: none 00:11:28.362 sectype: none 00:11:28.362 =====Discovery Log Entry 3====== 00:11:28.362 trtype: tcp 00:11:28.362 adrfam: ipv4 00:11:28.362 subtype: nvme subsystem 00:11:28.362 treq: not required 00:11:28.362 portid: 0 00:11:28.362 trsvcid: 4420 00:11:28.362 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:28.362 traddr: 10.0.0.2 00:11:28.362 eflags: none 00:11:28.362 sectype: none 00:11:28.362 =====Discovery Log Entry 4====== 00:11:28.362 trtype: tcp 00:11:28.362 adrfam: ipv4 00:11:28.362 subtype: nvme subsystem 
00:11:28.362 treq: not required 00:11:28.362 portid: 0 00:11:28.362 trsvcid: 4420 00:11:28.362 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:28.362 traddr: 10.0.0.2 00:11:28.362 eflags: none 00:11:28.362 sectype: none 00:11:28.362 =====Discovery Log Entry 5====== 00:11:28.362 trtype: tcp 00:11:28.362 adrfam: ipv4 00:11:28.362 subtype: discovery subsystem referral 00:11:28.362 treq: not required 00:11:28.362 portid: 0 00:11:28.362 trsvcid: 4430 00:11:28.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.362 traddr: 10.0.0.2 00:11:28.362 eflags: none 00:11:28.362 sectype: none 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:28.362 Perform nvmf subsystem discovery via RPC 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.362 [ 00:11:28.362 { 00:11:28.362 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:28.362 "subtype": "Discovery", 00:11:28.362 "listen_addresses": [ 00:11:28.362 { 00:11:28.362 "trtype": "TCP", 00:11:28.362 "adrfam": "IPv4", 00:11:28.362 "traddr": "10.0.0.2", 00:11:28.362 "trsvcid": "4420" 00:11:28.362 } 00:11:28.362 ], 00:11:28.362 "allow_any_host": true, 00:11:28.362 "hosts": [] 00:11:28.362 }, 00:11:28.362 { 00:11:28.362 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.362 "subtype": "NVMe", 00:11:28.362 "listen_addresses": [ 00:11:28.362 { 00:11:28.362 "trtype": "TCP", 00:11:28.362 "adrfam": "IPv4", 00:11:28.362 "traddr": "10.0.0.2", 00:11:28.362 "trsvcid": "4420" 00:11:28.362 } 00:11:28.362 ], 00:11:28.362 "allow_any_host": true, 00:11:28.362 "hosts": [], 00:11:28.362 "serial_number": "SPDK00000000000001", 00:11:28.362 "model_number": "SPDK bdev Controller", 00:11:28.362 "max_namespaces": 32, 00:11:28.362 "min_cntlid": 1, 00:11:28.362 "max_cntlid": 65519, 00:11:28.362 "namespaces": [ 00:11:28.362 { 00:11:28.362 "nsid": 1, 00:11:28.362 "bdev_name": "Null1", 00:11:28.362 "name": "Null1", 00:11:28.362 "nguid": "E08629CD7F7F440BBF6AA20DD8FC17D8", 00:11:28.362 "uuid": "e08629cd-7f7f-440b-bf6a-a20dd8fc17d8" 00:11:28.362 } 00:11:28.362 ] 00:11:28.362 }, 00:11:28.362 { 00:11:28.362 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:28.362 "subtype": "NVMe", 00:11:28.362 "listen_addresses": [ 00:11:28.362 { 00:11:28.362 "trtype": "TCP", 00:11:28.362 "adrfam": "IPv4", 00:11:28.362 "traddr": "10.0.0.2", 00:11:28.362 "trsvcid": "4420" 00:11:28.362 } 00:11:28.362 ], 00:11:28.362 "allow_any_host": true, 00:11:28.362 "hosts": [], 00:11:28.362 "serial_number": "SPDK00000000000002", 00:11:28.362 "model_number": "SPDK bdev Controller", 00:11:28.362 "max_namespaces": 32, 00:11:28.362 "min_cntlid": 1, 00:11:28.362 "max_cntlid": 65519, 00:11:28.362 "namespaces": [ 00:11:28.362 { 00:11:28.362 "nsid": 1, 00:11:28.362 "bdev_name": "Null2", 00:11:28.362 "name": "Null2", 00:11:28.362 "nguid": "051EB269E1EC484BA82E5632802F58F6", 00:11:28.362 "uuid": "051eb269-e1ec-484b-a82e-5632802f58f6" 00:11:28.362 } 00:11:28.362 ] 00:11:28.362 }, 00:11:28.362 { 00:11:28.362 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:28.362 "subtype": "NVMe", 00:11:28.362 "listen_addresses": [ 00:11:28.362 { 00:11:28.362 "trtype": "TCP", 00:11:28.362 "adrfam": "IPv4", 00:11:28.362 "traddr": "10.0.0.2", 
00:11:28.362 "trsvcid": "4420" 00:11:28.362 } 00:11:28.362 ], 00:11:28.362 "allow_any_host": true, 00:11:28.362 "hosts": [], 00:11:28.362 "serial_number": "SPDK00000000000003", 00:11:28.362 "model_number": "SPDK bdev Controller", 00:11:28.362 "max_namespaces": 32, 00:11:28.362 "min_cntlid": 1, 00:11:28.362 "max_cntlid": 65519, 00:11:28.362 "namespaces": [ 00:11:28.362 { 00:11:28.362 "nsid": 1, 00:11:28.362 "bdev_name": "Null3", 00:11:28.362 "name": "Null3", 00:11:28.362 "nguid": "9FCBE84C8CC043E9A2604DD4447F0A78", 00:11:28.362 "uuid": "9fcbe84c-8cc0-43e9-a260-4dd4447f0a78" 00:11:28.362 } 00:11:28.362 ] 00:11:28.362 }, 00:11:28.362 { 00:11:28.362 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:28.362 "subtype": "NVMe", 00:11:28.362 "listen_addresses": [ 00:11:28.362 { 00:11:28.362 "trtype": "TCP", 00:11:28.362 "adrfam": "IPv4", 00:11:28.362 "traddr": "10.0.0.2", 00:11:28.362 "trsvcid": "4420" 00:11:28.362 } 00:11:28.362 ], 00:11:28.362 "allow_any_host": true, 00:11:28.362 "hosts": [], 00:11:28.362 "serial_number": "SPDK00000000000004", 00:11:28.362 "model_number": "SPDK bdev Controller", 00:11:28.362 "max_namespaces": 32, 00:11:28.362 "min_cntlid": 1, 00:11:28.362 "max_cntlid": 65519, 00:11:28.362 "namespaces": [ 00:11:28.362 { 00:11:28.362 "nsid": 1, 00:11:28.362 "bdev_name": "Null4", 00:11:28.362 "name": "Null4", 00:11:28.362 "nguid": "A5E48B46423E458ABD82A1C352F76906", 00:11:28.362 "uuid": "a5e48b46-423e-458a-bd82-a1c352f76906" 00:11:28.362 } 00:11:28.362 ] 00:11:28.362 } 00:11:28.362 ] 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.362 00:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.362 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.623 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.623 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:28.623 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.623 00:18:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:28.623 00:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.623 rmmod nvme_tcp 00:11:28.623 rmmod nvme_fabrics 00:11:28.623 rmmod nvme_keyring 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 3145538 ']' 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 3145538 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3145538 ']' 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3145538 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3145538 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.623 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3145538' 00:11:28.623 killing process with pid 3145538 00:11:28.624 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3145538 00:11:28.624 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3145538 00:11:28.884 00:18:59 
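The teardown traced above mirrors the setup: every subsystem and null bdev is deleted, the port-4430 referral is removed, and nvmftestfini unloads the kernel NVMe/TCP modules before killing the target. Roughly, under the same assumptions as the setup sketch earlier:

  rpc=./scripts/rpc.py
  for i in 1 2 3 4; do
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $rpc bdev_null_delete Null$i
  done
  $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # pid saved when nvmf_tgt was launched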
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.884 00:18:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.453 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.453 00:11:31.453 real 0m11.739s 00:11:31.454 user 0m9.117s 00:11:31.454 sys 0m6.050s 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.454 ************************************ 00:11:31.454 END TEST nvmf_target_discovery 00:11:31.454 ************************************ 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.454 ************************************ 00:11:31.454 START TEST nvmf_referrals 00:11:31.454 ************************************ 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:31.454 * Looking for test storage... 
00:11:31.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.454 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.455 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:31.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.455 --rc genhtml_branch_coverage=1 00:11:31.455 --rc genhtml_function_coverage=1 00:11:31.456 --rc genhtml_legend=1 00:11:31.456 --rc geninfo_all_blocks=1 00:11:31.456 --rc geninfo_unexecuted_blocks=1 00:11:31.456 00:11:31.456 ' 00:11:31.456 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:31.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.456 --rc genhtml_branch_coverage=1 00:11:31.456 --rc genhtml_function_coverage=1 00:11:31.456 --rc genhtml_legend=1 00:11:31.456 --rc geninfo_all_blocks=1 00:11:31.456 --rc geninfo_unexecuted_blocks=1 00:11:31.456 00:11:31.456 ' 00:11:31.456 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:31.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.456 --rc genhtml_branch_coverage=1 00:11:31.456 --rc genhtml_function_coverage=1 00:11:31.456 --rc genhtml_legend=1 00:11:31.456 --rc geninfo_all_blocks=1 00:11:31.456 --rc geninfo_unexecuted_blocks=1 00:11:31.456 00:11:31.456 ' 00:11:31.456 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:31.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.456 --rc genhtml_branch_coverage=1 00:11:31.456 --rc genhtml_function_coverage=1 00:11:31.456 --rc genhtml_legend=1 00:11:31.456 --rc geninfo_all_blocks=1 00:11:31.456 --rc geninfo_unexecuted_blocks=1 00:11:31.456 00:11:31.456 ' 00:11:31.456 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.456 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:31.456 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.456 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.456 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.457 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.458 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.459 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.459 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.459 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.459 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:31.459 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:31.459 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:31.459 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:31.459 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:31.459 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.460 00:19:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:39.603 00:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.603 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:39.604 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:39.604 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:39.604 
00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:39.604 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:39.604 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:39.604 00:19:08 
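gather_supported_nvmf_pci_devs, traced above, matches each PCI function against a table of known device IDs (0x8086:0x159b is an Intel E810 "ice" function in this run) and then picks up the kernel netdev name from sysfs. The netdev lookup amounts to the following sketch, using the 0000:4b:00.0 address found here:

  pci=0000:4b:00.0
  # vendor and device IDs the table is matched against (0x8086 / 0x159b in this run)
  cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device
  # net devices registered under that function (cvl_0_0 in this run)
  ls /sys/bus/pci/devices/$pci/net/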
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.604 00:19:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:39.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:11:39.604 00:11:39.604 --- 10.0.0.2 ping statistics --- 00:11:39.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.604 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:39.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:11:39.604 00:11:39.604 --- 10.0.0.1 ping statistics --- 00:11:39.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.604 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=3149934 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 3149934 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3149934 ']' 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.604 00:19:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.604 [2024-10-09 00:19:09.356261] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:11:39.604 [2024-10-09 00:19:09.356328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.604 [2024-10-09 00:19:09.446039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.604 [2024-10-09 00:19:09.542060] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.604 [2024-10-09 00:19:09.542120] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.605 [2024-10-09 00:19:09.542129] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.605 [2024-10-09 00:19:09.542136] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.605 [2024-10-09 00:19:09.542143] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.605 [2024-10-09 00:19:09.544269] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.605 [2024-10-09 00:19:09.544436] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.605 [2024-10-09 00:19:09.544599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.605 [2024-10-09 00:19:09.544600] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.605 [2024-10-09 00:19:10.227226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.605 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
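Before the target is started, nvmf_tcp_init prepares an isolated point-to-point topology: the target-side port is moved into a network namespace, both ends get addresses, the NVMe/TCP port is opened in the firewall with a tagged rule, and reachability is checked in both directions. The commands below are a condensed sketch assembled from the trace above (interface names, addresses, and the comment tag are copied verbatim, not invented); the comment tag is what the later iptables cleanup greps for.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator/host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host
    # the target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF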
00:11:39.874 [2024-10-09 00:19:10.243573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:39.874 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:39.875 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:39.875 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.875 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:40.137 00:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.137 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.138 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.138 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.138 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.396 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:40.396 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:40.396 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:40.396 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.396 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.396 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.396 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.397 00:19:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:40.655 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.656 00:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.656 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.914 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.173 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.433 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:41.433 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:41.433 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:41.433 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:41.433 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.433 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:41.433 00:19:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
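The referral exercise that just finished boils down to the command sequence below. It is a condensed sketch of what target/referrals.sh drives: rpc_cmd in the trace is assumed here to forward to SPDK's scripts/rpc.py against /var/tmp/spdk.sock, and every address, port, and NQN is copied from the trace rather than invented. Each step is checked twice, once through the RPC view and once through the host-side discovery log page.

    # transport and discovery listener (from earlier in the trace)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

    # add three referrals and confirm both views report the same addresses
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # remove them again; the referral list should be empty afterwards
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length              # expect 0

    # referrals may also name a subsystem NQN; the discovery entry subtype then
    # distinguishes "nvme subsystem" from "discovery subsystem referral"
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery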
00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.433 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.433 rmmod nvme_tcp 00:11:41.433 rmmod nvme_fabrics 00:11:41.433 rmmod nvme_keyring 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 3149934 ']' 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 3149934 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3149934 ']' 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3149934 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3149934 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3149934' 00:11:41.694 killing process with pid 3149934 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3149934 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3149934 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:41.694 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:41.955 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.955 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.955 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.955 00:19:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.955 00:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.866 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:43.866 00:11:43.866 real 0m12.817s 00:11:43.866 user 0m14.099s 00:11:43.866 sys 0m6.334s 00:11:43.866 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.866 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:43.866 ************************************ 00:11:43.866 END TEST nvmf_referrals 00:11:43.866 ************************************ 00:11:43.867 00:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:43.867 00:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:43.867 00:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.867 00:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.867 ************************************ 00:11:43.867 START TEST nvmf_connect_disconnect 00:11:43.867 ************************************ 00:11:43.867 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:44.128 * Looking for test storage... 00:11:44.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.128 00:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.128 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:44.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.128 --rc genhtml_branch_coverage=1 00:11:44.129 --rc genhtml_function_coverage=1 00:11:44.129 --rc genhtml_legend=1 00:11:44.129 --rc geninfo_all_blocks=1 00:11:44.129 --rc geninfo_unexecuted_blocks=1 00:11:44.129 00:11:44.129 ' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:44.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.129 --rc genhtml_branch_coverage=1 00:11:44.129 --rc genhtml_function_coverage=1 00:11:44.129 --rc genhtml_legend=1 00:11:44.129 --rc geninfo_all_blocks=1 00:11:44.129 --rc geninfo_unexecuted_blocks=1 00:11:44.129 00:11:44.129 ' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:44.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.129 --rc genhtml_branch_coverage=1 00:11:44.129 --rc genhtml_function_coverage=1 00:11:44.129 --rc genhtml_legend=1 00:11:44.129 --rc geninfo_all_blocks=1 00:11:44.129 --rc geninfo_unexecuted_blocks=1 00:11:44.129 00:11:44.129 ' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:44.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.129 --rc genhtml_branch_coverage=1 00:11:44.129 --rc genhtml_function_coverage=1 00:11:44.129 --rc genhtml_legend=1 00:11:44.129 --rc geninfo_all_blocks=1 00:11:44.129 --rc geninfo_unexecuted_blocks=1 00:11:44.129 00:11:44.129 ' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.129 00:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.129 00:19:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.271 
00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:52.271 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.271 
00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:52.271 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:52.271 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:52.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:52.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.272 00:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:11:52.272 00:11:52.272 --- 10.0.0.2 ping statistics --- 00:11:52.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.272 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:11:52.272 00:11:52.272 --- 10.0.0.1 ping statistics --- 00:11:52.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.272 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=3154875 00:11:52.272 00:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 3154875 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3154875 ']' 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.272 00:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 [2024-10-09 00:19:22.371636] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:11:52.272 [2024-10-09 00:19:22.371703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.272 [2024-10-09 00:19:22.462012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.272 [2024-10-09 00:19:22.559042] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.272 [2024-10-09 00:19:22.559098] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.272 [2024-10-09 00:19:22.559108] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.272 [2024-10-09 00:19:22.559116] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.272 [2024-10-09 00:19:22.559122] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
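With nvmf_tgt now running inside the cvl_0_0_ns_spdk namespace and listening on /var/tmp/spdk.sock, the connect_disconnect test provisions the target over JSON-RPC. The rpc_cmd calls traced below are roughly equivalent to the following rpc.py invocations (arguments copied from the trace; the bare scripts/rpc.py path is a sketch, since the test actually routes these through its rpc_cmd wrapper and the target's network namespace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                      # creates the Malloc0 bdev used below
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then connects and disconnects an NVMe-oF controller against that listener for num_iterations=5, which produces the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines further down.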
00:11:52.272 [2024-10-09 00:19:22.561169] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.272 [2024-10-09 00:19:22.561331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.272 [2024-10-09 00:19:22.561525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.272 [2024-10-09 00:19:22.561526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.846 [2024-10-09 00:19:23.253289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.846 00:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:52.846 [2024-10-09 00:19:23.323069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:52.846 00:19:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:57.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.088 rmmod nvme_tcp 00:12:11.088 rmmod nvme_fabrics 00:12:11.088 rmmod nvme_keyring 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 3154875 ']' 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 3154875 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3154875 ']' 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3154875 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3154875 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3154875' 00:12:11.088 killing process with pid 3154875 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3154875 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3154875 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.088 00:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:13.732 00:12:13.732 real 0m29.215s 00:12:13.732 user 1m18.066s 00:12:13.732 sys 0m7.004s 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.732 ************************************ 00:12:13.732 END TEST nvmf_connect_disconnect 00:12:13.732 ************************************ 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.732 00:19:43 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:13.732 ************************************ 00:12:13.732 START TEST nvmf_multitarget 00:12:13.732 ************************************ 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:13.732 * Looking for test storage... 00:12:13.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.732 --rc genhtml_branch_coverage=1 00:12:13.732 --rc genhtml_function_coverage=1 00:12:13.732 --rc genhtml_legend=1 00:12:13.732 --rc geninfo_all_blocks=1 00:12:13.732 --rc geninfo_unexecuted_blocks=1 00:12:13.732 00:12:13.732 ' 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.732 --rc genhtml_branch_coverage=1 00:12:13.732 --rc genhtml_function_coverage=1 00:12:13.732 --rc genhtml_legend=1 00:12:13.732 --rc geninfo_all_blocks=1 00:12:13.732 --rc geninfo_unexecuted_blocks=1 00:12:13.732 00:12:13.732 ' 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.732 --rc genhtml_branch_coverage=1 00:12:13.732 --rc genhtml_function_coverage=1 00:12:13.732 --rc genhtml_legend=1 00:12:13.732 --rc geninfo_all_blocks=1 00:12:13.732 --rc geninfo_unexecuted_blocks=1 00:12:13.732 00:12:13.732 ' 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.732 --rc genhtml_branch_coverage=1 00:12:13.732 --rc genhtml_function_coverage=1 00:12:13.732 --rc genhtml_legend=1 00:12:13.732 --rc geninfo_all_blocks=1 00:12:13.732 --rc geninfo_unexecuted_blocks=1 00:12:13.732 00:12:13.732 ' 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.732 00:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.732 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.733 00:19:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:13.733 00:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:13.733 00:19:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.014 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:22.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:22.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:22.015 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:22.015 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:12:22.015 00:12:22.015 --- 10.0.0.2 ping statistics --- 00:12:22.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.015 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:12:22.015 00:12:22.015 --- 10.0.0.1 ping statistics --- 00:12:22.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.015 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=3162827 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 3162827 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3162827 ']' 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:22.015 00:19:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.015 [2024-10-09 00:19:51.578748] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:12:22.015 [2024-10-09 00:19:51.578811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.015 [2024-10-09 00:19:51.667633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.015 [2024-10-09 00:19:51.762454] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.015 [2024-10-09 00:19:51.762517] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.015 [2024-10-09 00:19:51.762525] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.015 [2024-10-09 00:19:51.762533] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.015 [2024-10-09 00:19:51.762539] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.015 [2024-10-09 00:19:51.764656] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.015 [2024-10-09 00:19:51.764818] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.015 [2024-10-09 00:19:51.764899] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.015 [2024-10-09 00:19:51.764901] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:22.015 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:22.272 "nvmf_tgt_1" 00:12:22.272 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:22.272 "nvmf_tgt_2" 00:12:22.272 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:22.272 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.530 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:22.530 00:19:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:22.530 true 00:12:22.530 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:22.530 true 00:12:22.530 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.530 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:22.788 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:22.788 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:22.788 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:22.788 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:22.788 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:22.788 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.788 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.789 rmmod nvme_tcp 00:12:22.789 rmmod nvme_fabrics 00:12:22.789 rmmod nvme_keyring 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 3162827 ']' 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 3162827 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3162827 ']' 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3162827 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3162827 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:22.789 00:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3162827' 00:12:22.789 killing process with pid 3162827 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3162827 00:12:22.789 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3162827 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.049 00:19:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:25.606 00:12:25.606 real 0m11.890s 00:12:25.606 user 0m10.207s 00:12:25.606 sys 0m6.196s 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.606 ************************************ 00:12:25.606 END TEST nvmf_multitarget 00:12:25.606 ************************************ 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.606 ************************************ 00:12:25.606 START TEST nvmf_rpc 00:12:25.606 ************************************ 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:25.606 * Looking for test storage... 
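The nvmf_multitarget run that just finished above reduces to a short RPC conversation with the running nvmf_tgt: count the targets, create two more, re-count, delete them, and confirm the count drops back to one. A condensed sketch of that sequence follows (script path, target names and flags are copied from the traced commands; the harness's traps, timing and cleanup are omitted, and an nvmf_tgt listening on the default RPC socket is assumed):

  #!/usr/bin/env bash
  # Sketch of the multitarget RPC flow exercised above -- not the full multitarget.sh.
  set -euo pipefail
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  # Only the default target should exist at the start.
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]

  # Create two extra targets (arguments copied verbatim from the trace).
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]

  # Delete them again and verify only the default target remains.
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]
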
00:12:25.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.606 --rc genhtml_branch_coverage=1 00:12:25.606 --rc genhtml_function_coverage=1 00:12:25.606 --rc genhtml_legend=1 00:12:25.606 --rc geninfo_all_blocks=1 00:12:25.606 --rc geninfo_unexecuted_blocks=1 00:12:25.606 00:12:25.606 ' 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.606 --rc genhtml_branch_coverage=1 00:12:25.606 --rc genhtml_function_coverage=1 00:12:25.606 --rc genhtml_legend=1 00:12:25.606 --rc geninfo_all_blocks=1 00:12:25.606 --rc geninfo_unexecuted_blocks=1 00:12:25.606 00:12:25.606 ' 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.606 --rc genhtml_branch_coverage=1 00:12:25.606 --rc genhtml_function_coverage=1 00:12:25.606 --rc genhtml_legend=1 00:12:25.606 --rc geninfo_all_blocks=1 00:12:25.606 --rc geninfo_unexecuted_blocks=1 00:12:25.606 00:12:25.606 ' 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.606 --rc genhtml_branch_coverage=1 00:12:25.606 --rc genhtml_function_coverage=1 00:12:25.606 --rc genhtml_legend=1 00:12:25.606 --rc geninfo_all_blocks=1 00:12:25.606 --rc geninfo_unexecuted_blocks=1 00:12:25.606 00:12:25.606 ' 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
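The lcov probe traced above funnels through the lt/cmp_versions helpers in scripts/common.sh, which split both version strings on '.', '-' and ':' and compare them field by field up to the longer length. A rough standalone equivalent is sketched below; the function name and the treatment of non-numeric or missing fields are choices of this sketch, not the exact behaviour of the SPDK helpers:

  # Illustrative "is version A older than version B" check in the spirit of cmp_versions.
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          [[ $x =~ ^[0-9]+$ ]] || x=0   # assumption of this sketch: non-numeric fields count as 0
          [[ $y =~ ^[0-9]+$ ]] || y=0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov is older than 2.x"
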
00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.606 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:25.607 00:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.607 00:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:33.751 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:33.751 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:33.751 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.751 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:33.752 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.752 00:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:12:33.752 00:12:33.752 --- 10.0.0.2 ping statistics --- 00:12:33.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.752 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:12:33.752 00:12:33.752 --- 10.0.0.1 ping statistics --- 00:12:33.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.752 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=3167516 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 3167516 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3167516 ']' 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.752 00:20:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.752 [2024-10-09 00:20:03.569518] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
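Before the rpc.sh test proper begins, nvmftestinit (traced above) turns the two E810 ports into a miniature two-host TCP topology: the target-side port is moved into its own network namespace so the SPDK target and the kernel initiator really do talk over the wire. The essential commands, stripped out of the trace, are sketched below; interface names, namespace name and addresses are the ones used in this particular run, and root privileges are assumed:

  # Minimal recreation of the namespace topology nvmftestinit builds above.
  TGT_IF=cvl_0_0        # port handed to the SPDK target
  INI_IF=cvl_0_1        # port used by the kernel NVMe/TCP initiator
  NS=cvl_0_0_ns_spdk    # namespace that owns the target port

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP port locally and load the kernel initiator driver.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  modprobe nvme-tcp

  # Both directions must answer before the target is started inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

The target itself is then launched inside the namespace with ip netns exec, exactly as in the nvmfappstart line traced above.
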
00:12:33.752 [2024-10-09 00:20:03.569585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.752 [2024-10-09 00:20:03.660537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.752 [2024-10-09 00:20:03.754558] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.752 [2024-10-09 00:20:03.754621] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.752 [2024-10-09 00:20:03.754630] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.752 [2024-10-09 00:20:03.754637] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.752 [2024-10-09 00:20:03.754650] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.752 [2024-10-09 00:20:03.756796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.752 [2024-10-09 00:20:03.757006] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.752 [2024-10-09 00:20:03.757134] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.752 [2024-10-09 00:20:03.757135] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:34.012 "tick_rate": 2400000000, 00:12:34.012 "poll_groups": [ 00:12:34.012 { 00:12:34.012 "name": "nvmf_tgt_poll_group_000", 00:12:34.012 "admin_qpairs": 0, 00:12:34.012 "io_qpairs": 0, 00:12:34.012 "current_admin_qpairs": 0, 00:12:34.012 "current_io_qpairs": 0, 00:12:34.012 "pending_bdev_io": 0, 00:12:34.012 "completed_nvme_io": 0, 00:12:34.012 "transports": [] 00:12:34.012 }, 00:12:34.012 { 00:12:34.012 "name": "nvmf_tgt_poll_group_001", 00:12:34.012 "admin_qpairs": 0, 00:12:34.012 "io_qpairs": 0, 00:12:34.012 "current_admin_qpairs": 0, 00:12:34.012 "current_io_qpairs": 0, 00:12:34.012 "pending_bdev_io": 0, 00:12:34.012 "completed_nvme_io": 0, 00:12:34.012 "transports": [] 00:12:34.012 }, 00:12:34.012 { 00:12:34.012 "name": "nvmf_tgt_poll_group_002", 00:12:34.012 "admin_qpairs": 0, 00:12:34.012 "io_qpairs": 0, 00:12:34.012 
"current_admin_qpairs": 0, 00:12:34.012 "current_io_qpairs": 0, 00:12:34.012 "pending_bdev_io": 0, 00:12:34.012 "completed_nvme_io": 0, 00:12:34.012 "transports": [] 00:12:34.012 }, 00:12:34.012 { 00:12:34.012 "name": "nvmf_tgt_poll_group_003", 00:12:34.012 "admin_qpairs": 0, 00:12:34.012 "io_qpairs": 0, 00:12:34.012 "current_admin_qpairs": 0, 00:12:34.012 "current_io_qpairs": 0, 00:12:34.012 "pending_bdev_io": 0, 00:12:34.012 "completed_nvme_io": 0, 00:12:34.012 "transports": [] 00:12:34.012 } 00:12:34.012 ] 00:12:34.012 }' 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.012 [2024-10-09 00:20:04.570188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:34.012 "tick_rate": 2400000000, 00:12:34.012 "poll_groups": [ 00:12:34.012 { 00:12:34.012 "name": "nvmf_tgt_poll_group_000", 00:12:34.012 "admin_qpairs": 0, 00:12:34.012 "io_qpairs": 0, 00:12:34.012 "current_admin_qpairs": 0, 00:12:34.012 "current_io_qpairs": 0, 00:12:34.012 "pending_bdev_io": 0, 00:12:34.012 "completed_nvme_io": 0, 00:12:34.012 "transports": [ 00:12:34.012 { 00:12:34.012 "trtype": "TCP" 00:12:34.012 } 00:12:34.012 ] 00:12:34.012 }, 00:12:34.012 { 00:12:34.012 "name": "nvmf_tgt_poll_group_001", 00:12:34.012 "admin_qpairs": 0, 00:12:34.012 "io_qpairs": 0, 00:12:34.012 "current_admin_qpairs": 0, 00:12:34.012 "current_io_qpairs": 0, 00:12:34.012 "pending_bdev_io": 0, 00:12:34.012 "completed_nvme_io": 0, 00:12:34.012 "transports": [ 00:12:34.012 { 00:12:34.012 "trtype": "TCP" 00:12:34.012 } 00:12:34.012 ] 00:12:34.012 }, 00:12:34.012 { 00:12:34.012 "name": "nvmf_tgt_poll_group_002", 00:12:34.012 "admin_qpairs": 0, 00:12:34.012 "io_qpairs": 0, 00:12:34.012 "current_admin_qpairs": 0, 00:12:34.012 "current_io_qpairs": 0, 00:12:34.012 "pending_bdev_io": 0, 00:12:34.012 "completed_nvme_io": 0, 00:12:34.012 "transports": [ 00:12:34.012 { 00:12:34.012 "trtype": "TCP" 
00:12:34.012 } 00:12:34.012 ] 00:12:34.012 }, 00:12:34.012 { 00:12:34.012 "name": "nvmf_tgt_poll_group_003", 00:12:34.012 "admin_qpairs": 0, 00:12:34.012 "io_qpairs": 0, 00:12:34.012 "current_admin_qpairs": 0, 00:12:34.012 "current_io_qpairs": 0, 00:12:34.012 "pending_bdev_io": 0, 00:12:34.012 "completed_nvme_io": 0, 00:12:34.012 "transports": [ 00:12:34.012 { 00:12:34.012 "trtype": "TCP" 00:12:34.012 } 00:12:34.012 ] 00:12:34.012 } 00:12:34.012 ] 00:12:34.012 }' 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:34.012 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.272 Malloc1 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.272 [2024-10-09 00:20:04.772580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:34.272 [2024-10-09 00:20:04.809635] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:34.272 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:34.272 could not add new controller: failed to write to nvme-fabrics device 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:34.272 00:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.272 00:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.650 00:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.650 00:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:35.650 00:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.650 00:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:35.650 00:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.180 [2024-10-09 00:20:08.566764] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:38.180 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:38.180 could not add new controller: failed to write to nvme-fabrics device 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.180 
00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.180 00:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.553 00:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.553 00:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:39.553 00:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.553 00:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:39.553 00:20:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.088 
00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.088 [2024-10-09 00:20:12.360896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.088 00:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.462 00:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.462 00:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.462 00:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.462 00:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:43.462 00:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:45.364 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.625 00:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.625 [2024-10-09 00:20:16.043693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.625 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.626 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.626 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.626 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.626 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.626 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.626 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.626 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.626 00:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.014 00:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.014 00:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:47.014 00:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.014 00:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:47.014 00:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.553 [2024-10-09 00:20:19.743858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.553 00:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.941 00:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.941 00:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:50.941 00:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.941 00:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:50.941 00:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:52.845 
00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.845 [2024-10-09 00:20:23.459740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.845 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.103 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.104 00:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.477 00:20:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.477 00:20:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:54.477 00:20:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.477 00:20:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:54.477 00:20:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:56.400 00:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:56.400 00:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:56.400 00:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.400 00:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:56.400 00:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.400 00:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:56.400 00:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.400 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.400 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:56.400 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:56.400 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:56.400 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:56.400 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.658 [2024-10-09 00:20:27.085580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.658 00:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.042 00:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.042 00:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:58.042 00:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.042 00:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:58.043 00:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:00.581 
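Each pass of the loop just completed exercises the same subsystem lifecycle over the SPDK RPC interface; the next loop (target/rpc.sh@99 onward) repeats it without the host connect step. A rough standalone reproduction of one iteration, driven through the in-tree scripts/rpc.py, might look like the sketch below. The RPC names, flags, NQN, serial, address and nsid are copied from the log; the rpc.py path and the omission of --hostnqn/--hostid on the connect (which the test does pass) are simplifications.

    #!/usr/bin/env bash
    # Sketch of one target/rpc.sh loop iteration, values taken from the trace above.
    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5        # attach the Malloc1 bdev as nsid 5
    $RPC nvmf_subsystem_allow_any_host "$NQN"

    # Host side: connect, confirm the namespace is visible, then tear down.
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME
    nvme disconnect -n "$NQN"

    $RPC nvmf_subsystem_remove_ns "$NQN" 5
    $RPC nvmf_delete_subsystem "$NQN"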
00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 [2024-10-09 00:20:30.799016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 [2024-10-09 00:20:30.867077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.581 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 
00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 [2024-10-09 00:20:30.935259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 [2024-10-09 00:20:31.007466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 [2024-10-09 00:20:31.071660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.582 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:00.582 "tick_rate": 2400000000, 00:13:00.582 "poll_groups": [ 00:13:00.582 { 00:13:00.582 "name": "nvmf_tgt_poll_group_000", 00:13:00.582 "admin_qpairs": 0, 00:13:00.582 "io_qpairs": 224, 00:13:00.582 "current_admin_qpairs": 0, 00:13:00.582 "current_io_qpairs": 0, 00:13:00.582 "pending_bdev_io": 0, 00:13:00.582 "completed_nvme_io": 274, 00:13:00.582 "transports": [ 00:13:00.582 { 00:13:00.582 "trtype": "TCP" 00:13:00.582 } 00:13:00.582 ] 00:13:00.582 }, 00:13:00.582 { 00:13:00.582 "name": "nvmf_tgt_poll_group_001", 00:13:00.582 "admin_qpairs": 1, 00:13:00.582 "io_qpairs": 223, 00:13:00.582 "current_admin_qpairs": 0, 00:13:00.582 "current_io_qpairs": 0, 00:13:00.582 "pending_bdev_io": 0, 00:13:00.582 "completed_nvme_io": 517, 00:13:00.582 "transports": [ 00:13:00.582 { 00:13:00.582 "trtype": "TCP" 00:13:00.582 } 00:13:00.582 ] 00:13:00.582 }, 00:13:00.582 { 00:13:00.582 "name": "nvmf_tgt_poll_group_002", 00:13:00.582 "admin_qpairs": 6, 00:13:00.582 "io_qpairs": 218, 00:13:00.582 "current_admin_qpairs": 0, 00:13:00.582 "current_io_qpairs": 0, 00:13:00.582 "pending_bdev_io": 0, 00:13:00.582 "completed_nvme_io": 224, 00:13:00.582 "transports": [ 00:13:00.582 { 00:13:00.582 "trtype": "TCP" 00:13:00.582 } 00:13:00.582 ] 00:13:00.582 }, 00:13:00.582 { 00:13:00.582 "name": "nvmf_tgt_poll_group_003", 00:13:00.582 "admin_qpairs": 0, 00:13:00.582 "io_qpairs": 224, 00:13:00.583 "current_admin_qpairs": 0, 00:13:00.583 "current_io_qpairs": 0, 00:13:00.583 "pending_bdev_io": 0, 00:13:00.583 "completed_nvme_io": 224, 00:13:00.583 "transports": [ 00:13:00.583 { 00:13:00.583 "trtype": "TCP" 00:13:00.583 } 00:13:00.583 ] 00:13:00.583 } 00:13:00.583 ] 00:13:00.583 }' 00:13:00.583 00:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:00.583 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.583 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.583 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.583 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:00.583 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.583 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.583 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.583 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.841 rmmod nvme_tcp 00:13:00.841 rmmod nvme_fabrics 00:13:00.841 rmmod nvme_keyring 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 3167516 ']' 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 3167516 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3167516 ']' 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3167516 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3167516 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3167516' 00:13:00.841 killing process with pid 3167516 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3167516 00:13:00.841 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3167516 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.101 00:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.010 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.010 00:13:03.010 real 0m37.833s 00:13:03.010 user 1m52.937s 00:13:03.010 sys 0m7.581s 00:13:03.010 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.010 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.010 ************************************ 00:13:03.010 END TEST nvmf_rpc 00:13:03.010 ************************************ 00:13:03.010 00:20:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:03.010 00:20:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:03.010 00:20:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.010 00:20:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.272 ************************************ 00:13:03.272 START TEST nvmf_invalid 00:13:03.272 ************************************ 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:03.272 * Looking for test storage... 
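The jsum checks near the end of the nvmf_rpc run sum one numeric field across all poll groups in the nvmf_get_stats output; with the stats printed above, admin_qpairs sums to 0 + 1 + 6 + 0 = 7 and io_qpairs to 224 + 223 + 218 + 224 = 889, both non-zero as the test requires. The jq/awk pattern, sketched standalone (stats.json stands in for the JSON the test captures into its $stats variable; the helper name matches the trace but this is not the exact target/rpc.sh code):

    # Sum one field over all poll groups from `rpc.py nvmf_get_stats` output.
    jsum() {
        local filter=$1
        jq "$filter" stats.json | awk '{s+=$1} END {print s}'
    }

    admin=$(jsum '.poll_groups[].admin_qpairs')   # 7 for the run logged above
    io=$(jsum '.poll_groups[].io_qpairs')         # 889 for the run logged above
    (( admin > 0 )) && (( io > 0 )) && echo "qpair accounting looks sane"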
00:13:03.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:03.272 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:03.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.273 --rc genhtml_branch_coverage=1 00:13:03.273 --rc genhtml_function_coverage=1 00:13:03.273 --rc genhtml_legend=1 00:13:03.273 --rc geninfo_all_blocks=1 00:13:03.273 --rc geninfo_unexecuted_blocks=1 00:13:03.273 00:13:03.273 ' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:03.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.273 --rc genhtml_branch_coverage=1 00:13:03.273 --rc genhtml_function_coverage=1 00:13:03.273 --rc genhtml_legend=1 00:13:03.273 --rc geninfo_all_blocks=1 00:13:03.273 --rc geninfo_unexecuted_blocks=1 00:13:03.273 00:13:03.273 ' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:03.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.273 --rc genhtml_branch_coverage=1 00:13:03.273 --rc genhtml_function_coverage=1 00:13:03.273 --rc genhtml_legend=1 00:13:03.273 --rc geninfo_all_blocks=1 00:13:03.273 --rc geninfo_unexecuted_blocks=1 00:13:03.273 00:13:03.273 ' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:03.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.273 --rc genhtml_branch_coverage=1 00:13:03.273 --rc genhtml_function_coverage=1 00:13:03.273 --rc genhtml_legend=1 00:13:03.273 --rc geninfo_all_blocks=1 00:13:03.273 --rc geninfo_unexecuted_blocks=1 00:13:03.273 00:13:03.273 ' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:03.273 00:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.273 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.535 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:03.535 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:03.535 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.535 00:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.685 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:11.686 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:11.686 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:11.686 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:11.686 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:13:11.686 00:13:11.686 --- 10.0.0.2 ping statistics --- 00:13:11.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.686 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:13:11.686 00:13:11.686 --- 10.0.0.1 ping statistics --- 00:13:11.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.686 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=3177176 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 3177176 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3177176 ']' 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.686 00:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.687 [2024-10-09 00:20:41.516089] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
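The trace up to this point is nvmf_tcp_init from test/nvmf/common.sh building the two-port TCP test bed on the E810 pair: one port (cvl_0_0) is moved into a fresh network namespace to act as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, connectivity is verified with ping in both directions, and nvmf_tgt is then launched inside the namespace. A condensed recap of the commands visible above, as a sketch only (interface names, addresses and the launch arguments are taken from this run, not quoted from the script itself):

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the script also tags the rule with an SPDK_NVMF comment
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace
  modprobe nvme-tcp
  # the target then runs inside the namespace; waitforlisten polls /var/tmp/spdk.sock until it is ready
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &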
00:13:11.687 [2024-10-09 00:20:41.516153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.687 [2024-10-09 00:20:41.604310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.687 [2024-10-09 00:20:41.699818] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.687 [2024-10-09 00:20:41.699875] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.687 [2024-10-09 00:20:41.699884] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.687 [2024-10-09 00:20:41.699891] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.687 [2024-10-09 00:20:41.699898] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.687 [2024-10-09 00:20:41.701953] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.687 [2024-10-09 00:20:41.702118] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.687 [2024-10-09 00:20:41.702286] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.687 [2024-10-09 00:20:41.702287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.946 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:11.946 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:11.946 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:11.946 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:11.946 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.946 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.946 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:11.946 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9072 00:13:11.946 [2024-10-09 00:20:42.553534] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:12.205 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:12.205 { 00:13:12.205 "nqn": "nqn.2016-06.io.spdk:cnode9072", 00:13:12.205 "tgt_name": "foobar", 00:13:12.205 "method": "nvmf_create_subsystem", 00:13:12.205 "req_id": 1 00:13:12.205 } 00:13:12.205 Got JSON-RPC error response 00:13:12.205 response: 00:13:12.205 { 00:13:12.205 "code": -32603, 00:13:12.205 "message": "Unable to find target foobar" 00:13:12.205 }' 00:13:12.205 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:12.205 { 00:13:12.205 "nqn": "nqn.2016-06.io.spdk:cnode9072", 00:13:12.205 "tgt_name": "foobar", 00:13:12.205 "method": "nvmf_create_subsystem", 00:13:12.205 "req_id": 1 00:13:12.205 } 00:13:12.205 Got JSON-RPC error response 00:13:12.205 
response: 00:13:12.205 { 00:13:12.205 "code": -32603, 00:13:12.205 "message": "Unable to find target foobar" 00:13:12.205 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:12.205 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:12.205 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4759 00:13:12.205 [2024-10-09 00:20:42.758411] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4759: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:12.205 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:12.205 { 00:13:12.205 "nqn": "nqn.2016-06.io.spdk:cnode4759", 00:13:12.205 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:12.205 "method": "nvmf_create_subsystem", 00:13:12.205 "req_id": 1 00:13:12.205 } 00:13:12.205 Got JSON-RPC error response 00:13:12.205 response: 00:13:12.205 { 00:13:12.205 "code": -32602, 00:13:12.205 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:12.205 }' 00:13:12.205 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:12.205 { 00:13:12.205 "nqn": "nqn.2016-06.io.spdk:cnode4759", 00:13:12.205 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:12.205 "method": "nvmf_create_subsystem", 00:13:12.205 "req_id": 1 00:13:12.205 } 00:13:12.205 Got JSON-RPC error response 00:13:12.205 response: 00:13:12.205 { 00:13:12.205 "code": -32602, 00:13:12.205 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:12.205 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:12.205 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:12.205 00:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25811 00:13:12.465 [2024-10-09 00:20:42.967162] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25811: invalid model number 'SPDK_Controller' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:12.465 { 00:13:12.465 "nqn": "nqn.2016-06.io.spdk:cnode25811", 00:13:12.465 "model_number": "SPDK_Controller\u001f", 00:13:12.465 "method": "nvmf_create_subsystem", 00:13:12.465 "req_id": 1 00:13:12.465 } 00:13:12.465 Got JSON-RPC error response 00:13:12.465 response: 00:13:12.465 { 00:13:12.465 "code": -32602, 00:13:12.465 "message": "Invalid MN SPDK_Controller\u001f" 00:13:12.465 }' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:12.465 { 00:13:12.465 "nqn": "nqn.2016-06.io.spdk:cnode25811", 00:13:12.465 "model_number": "SPDK_Controller\u001f", 00:13:12.465 "method": "nvmf_create_subsystem", 00:13:12.465 "req_id": 1 00:13:12.465 } 00:13:12.465 Got JSON-RPC error response 00:13:12.465 response: 00:13:12.465 { 00:13:12.465 "code": -32602, 00:13:12.465 "message": "Invalid MN SPDK_Controller\u001f" 00:13:12.465 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:12.465 00:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.465 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:12.724 
00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 
00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'dp/,MQ-D\}Y?IByj)By' 00:13:12.724 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'dp/,MQ-D\}Y?IByj)By' nqn.2016-06.io.spdk:cnode9823 00:13:12.724 [2024-10-09 00:20:43.352622] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9823: invalid serial number 'dp/,MQ-D\}Y?IByj)By' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:12.992 { 00:13:12.992 "nqn": "nqn.2016-06.io.spdk:cnode9823", 00:13:12.992 "serial_number": "dp/,MQ\u007f-D\\}Y?\u007fIByj)By", 00:13:12.992 "method": "nvmf_create_subsystem", 00:13:12.992 "req_id": 1 00:13:12.992 } 00:13:12.992 Got JSON-RPC error response 00:13:12.992 response: 00:13:12.992 { 00:13:12.992 "code": -32602, 00:13:12.992 "message": "Invalid SN dp/,MQ\u007f-D\\}Y?\u007fIByj)By" 00:13:12.992 }' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:12.992 { 00:13:12.992 "nqn": "nqn.2016-06.io.spdk:cnode9823", 00:13:12.992 "serial_number": "dp/,MQ\u007f-D\\}Y?\u007fIByj)By", 00:13:12.992 "method": "nvmf_create_subsystem", 00:13:12.992 "req_id": 1 00:13:12.992 } 00:13:12.992 Got JSON-RPC error response 00:13:12.992 response: 00:13:12.992 { 00:13:12.992 "code": -32602, 00:13:12.992 "message": "Invalid SN dp/,MQ\u007f-D\\}Y?\u007fIByj)By" 00:13:12.992 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' 
'67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x2b' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.992 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 111 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:12.993 00:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.993 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:13.251 
00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.251 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 
00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '$Nf>}+;m`NUUosF=TeXs#[xqI!wy3NpWA\uSQHP' 00:13:13.252 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$Nf>}+;m`NUUosF=TeXs#[xqI!wy3NpWA\uSQHP' nqn.2016-06.io.spdk:cnode31530 00:13:13.510 [2024-10-09 00:20:43.894739] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31530: invalid model number '$Nf>}+;m`NUUosF=TeXs#[xqI!wy3NpWA\uSQHP' 00:13:13.510 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:13.510 { 00:13:13.510 "nqn": "nqn.2016-06.io.spdk:cnode31530", 00:13:13.510 "model_number": "$Nf>}+;m`NUUosF=TeXs#[x\u007fqI!wy3NpWA\u007f\\uSQHP", 00:13:13.510 "method": "nvmf_create_subsystem", 00:13:13.510 "req_id": 1 00:13:13.510 } 00:13:13.510 Got JSON-RPC error response 00:13:13.510 response: 00:13:13.510 { 00:13:13.510 "code": -32602, 00:13:13.510 "message": "Invalid MN $Nf>}+;m`NUUosF=TeXs#[x\u007fqI!wy3NpWA\u007f\\uSQHP" 00:13:13.510 }' 00:13:13.510 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:13.510 { 00:13:13.510 "nqn": "nqn.2016-06.io.spdk:cnode31530", 00:13:13.510 "model_number": "$Nf>}+;m`NUUosF=TeXs#[x\u007fqI!wy3NpWA\u007f\\uSQHP", 00:13:13.510 "method": "nvmf_create_subsystem", 00:13:13.510 "req_id": 1 00:13:13.510 } 00:13:13.510 Got JSON-RPC error response 00:13:13.510 response: 00:13:13.510 { 00:13:13.510 "code": -32602, 00:13:13.510 "message": "Invalid MN $Nf>}+;m`NUUosF=TeXs#[x\u007fqI!wy3NpWA\u007f\\uSQHP" 00:13:13.510 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:13.510 00:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:13.510 [2024-10-09 00:20:44.095568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.510 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:13.769 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:13.769 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:13.769 00:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:13.769 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:13.769 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:14.027 [2024-10-09 00:20:44.509163] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:14.027 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:14.027 { 00:13:14.027 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:14.027 "listen_address": { 00:13:14.027 "trtype": "tcp", 00:13:14.027 "traddr": "", 00:13:14.027 "trsvcid": "4421" 00:13:14.027 }, 00:13:14.027 "method": "nvmf_subsystem_remove_listener", 00:13:14.027 "req_id": 1 00:13:14.027 } 00:13:14.027 Got JSON-RPC error response 00:13:14.027 response: 00:13:14.027 { 00:13:14.027 "code": -32602, 00:13:14.027 "message": "Invalid parameters" 00:13:14.027 }' 00:13:14.027 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:14.027 { 00:13:14.027 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:14.028 "listen_address": { 00:13:14.028 "trtype": "tcp", 00:13:14.028 "traddr": "", 00:13:14.028 "trsvcid": "4421" 00:13:14.028 }, 00:13:14.028 "method": "nvmf_subsystem_remove_listener", 00:13:14.028 "req_id": 1 00:13:14.028 } 00:13:14.028 Got JSON-RPC error response 00:13:14.028 response: 00:13:14.028 { 00:13:14.028 "code": -32602, 00:13:14.028 "message": "Invalid parameters" 00:13:14.028 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:14.028 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14720 -i 0 00:13:14.286 [2024-10-09 00:20:44.713858] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14720: invalid cntlid range [0-65519] 00:13:14.286 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:14.286 { 00:13:14.286 "nqn": "nqn.2016-06.io.spdk:cnode14720", 00:13:14.286 "min_cntlid": 0, 00:13:14.286 "method": "nvmf_create_subsystem", 00:13:14.286 "req_id": 1 00:13:14.286 } 00:13:14.286 Got JSON-RPC error response 00:13:14.286 response: 00:13:14.286 { 00:13:14.286 "code": -32602, 00:13:14.286 "message": "Invalid cntlid range [0-65519]" 00:13:14.286 }' 00:13:14.286 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:14.286 { 00:13:14.286 "nqn": "nqn.2016-06.io.spdk:cnode14720", 00:13:14.286 "min_cntlid": 0, 00:13:14.286 "method": "nvmf_create_subsystem", 00:13:14.286 "req_id": 1 00:13:14.286 } 00:13:14.286 Got JSON-RPC error response 00:13:14.286 response: 00:13:14.286 { 00:13:14.286 "code": -32602, 00:13:14.286 "message": "Invalid cntlid range [0-65519]" 00:13:14.286 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.286 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12102 -i 65520 00:13:14.286 [2024-10-09 00:20:44.906495] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12102: invalid cntlid range [65520-65519] 00:13:14.544 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@75 -- # out='request: 00:13:14.544 { 00:13:14.544 "nqn": "nqn.2016-06.io.spdk:cnode12102", 00:13:14.544 "min_cntlid": 65520, 00:13:14.544 "method": "nvmf_create_subsystem", 00:13:14.544 "req_id": 1 00:13:14.544 } 00:13:14.544 Got JSON-RPC error response 00:13:14.544 response: 00:13:14.544 { 00:13:14.544 "code": -32602, 00:13:14.544 "message": "Invalid cntlid range [65520-65519]" 00:13:14.544 }' 00:13:14.544 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:14.544 { 00:13:14.544 "nqn": "nqn.2016-06.io.spdk:cnode12102", 00:13:14.544 "min_cntlid": 65520, 00:13:14.544 "method": "nvmf_create_subsystem", 00:13:14.544 "req_id": 1 00:13:14.544 } 00:13:14.544 Got JSON-RPC error response 00:13:14.544 response: 00:13:14.544 { 00:13:14.544 "code": -32602, 00:13:14.544 "message": "Invalid cntlid range [65520-65519]" 00:13:14.544 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.544 00:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30788 -I 0 00:13:14.544 [2024-10-09 00:20:45.095059] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30788: invalid cntlid range [1-0] 00:13:14.544 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:14.544 { 00:13:14.544 "nqn": "nqn.2016-06.io.spdk:cnode30788", 00:13:14.544 "max_cntlid": 0, 00:13:14.544 "method": "nvmf_create_subsystem", 00:13:14.544 "req_id": 1 00:13:14.544 } 00:13:14.544 Got JSON-RPC error response 00:13:14.544 response: 00:13:14.544 { 00:13:14.544 "code": -32602, 00:13:14.544 "message": "Invalid cntlid range [1-0]" 00:13:14.544 }' 00:13:14.544 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:14.544 { 00:13:14.544 "nqn": "nqn.2016-06.io.spdk:cnode30788", 00:13:14.544 "max_cntlid": 0, 00:13:14.544 "method": "nvmf_create_subsystem", 00:13:14.544 "req_id": 1 00:13:14.544 } 00:13:14.544 Got JSON-RPC error response 00:13:14.544 response: 00:13:14.544 { 00:13:14.544 "code": -32602, 00:13:14.544 "message": "Invalid cntlid range [1-0]" 00:13:14.544 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.544 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19797 -I 65520 00:13:14.801 [2024-10-09 00:20:45.283671] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19797: invalid cntlid range [1-65520] 00:13:14.801 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:14.801 { 00:13:14.801 "nqn": "nqn.2016-06.io.spdk:cnode19797", 00:13:14.801 "max_cntlid": 65520, 00:13:14.801 "method": "nvmf_create_subsystem", 00:13:14.801 "req_id": 1 00:13:14.801 } 00:13:14.801 Got JSON-RPC error response 00:13:14.801 response: 00:13:14.801 { 00:13:14.801 "code": -32602, 00:13:14.801 "message": "Invalid cntlid range [1-65520]" 00:13:14.801 }' 00:13:14.801 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:14.801 { 00:13:14.801 "nqn": "nqn.2016-06.io.spdk:cnode19797", 00:13:14.801 "max_cntlid": 65520, 00:13:14.801 "method": "nvmf_create_subsystem", 00:13:14.801 "req_id": 1 00:13:14.801 } 00:13:14.801 Got JSON-RPC error response 00:13:14.801 response: 00:13:14.801 { 00:13:14.801 
"code": -32602, 00:13:14.801 "message": "Invalid cntlid range [1-65520]" 00:13:14.801 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.801 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15317 -i 6 -I 5 00:13:15.059 [2024-10-09 00:20:45.472320] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15317: invalid cntlid range [6-5] 00:13:15.059 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:15.059 { 00:13:15.059 "nqn": "nqn.2016-06.io.spdk:cnode15317", 00:13:15.059 "min_cntlid": 6, 00:13:15.059 "max_cntlid": 5, 00:13:15.059 "method": "nvmf_create_subsystem", 00:13:15.059 "req_id": 1 00:13:15.059 } 00:13:15.059 Got JSON-RPC error response 00:13:15.059 response: 00:13:15.059 { 00:13:15.059 "code": -32602, 00:13:15.059 "message": "Invalid cntlid range [6-5]" 00:13:15.059 }' 00:13:15.059 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:15.059 { 00:13:15.059 "nqn": "nqn.2016-06.io.spdk:cnode15317", 00:13:15.059 "min_cntlid": 6, 00:13:15.059 "max_cntlid": 5, 00:13:15.059 "method": "nvmf_create_subsystem", 00:13:15.059 "req_id": 1 00:13:15.059 } 00:13:15.060 Got JSON-RPC error response 00:13:15.060 response: 00:13:15.060 { 00:13:15.060 "code": -32602, 00:13:15.060 "message": "Invalid cntlid range [6-5]" 00:13:15.060 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:15.060 { 00:13:15.060 "name": "foobar", 00:13:15.060 "method": "nvmf_delete_target", 00:13:15.060 "req_id": 1 00:13:15.060 } 00:13:15.060 Got JSON-RPC error response 00:13:15.060 response: 00:13:15.060 { 00:13:15.060 "code": -32602, 00:13:15.060 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:15.060 }' 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:15.060 { 00:13:15.060 "name": "foobar", 00:13:15.060 "method": "nvmf_delete_target", 00:13:15.060 "req_id": 1 00:13:15.060 } 00:13:15.060 Got JSON-RPC error response 00:13:15.060 response: 00:13:15.060 { 00:13:15.060 "code": -32602, 00:13:15.060 "message": "The specified target doesn't exist, cannot delete it." 
00:13:15.060 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.060 rmmod nvme_tcp 00:13:15.060 rmmod nvme_fabrics 00:13:15.060 rmmod nvme_keyring 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 3177176 ']' 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 3177176 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3177176 ']' 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3177176 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.060 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3177176 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3177176' 00:13:15.320 killing process with pid 3177176 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3177176 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3177176 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 
-- # iptables-restore 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.320 00:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.864 00:20:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:17.864 00:13:17.864 real 0m14.289s 00:13:17.864 user 0m21.329s 00:13:17.864 sys 0m6.781s 00:13:17.864 00:20:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.864 00:20:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:17.864 ************************************ 00:13:17.864 END TEST nvmf_invalid 00:13:17.864 ************************************ 00:13:17.864 00:20:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:17.864 00:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:17.864 00:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.864 00:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:17.864 ************************************ 00:13:17.864 START TEST nvmf_connect_stress 00:13:17.864 ************************************ 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:17.864 * Looking for test storage... 
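The nvmf_invalid run that finishes above is a negative-path sweep of the SPDK JSON-RPC interface: target/invalid.sh builds a random model-number string byte by byte (printf %x / echo -e, including non-printable bytes such as 0x7f, which surface as \u007f in the JSON error text), then feeds it and a series of out-of-range cntlid values to rpc.py and asserts that every request is rejected with the expected error message. A minimal sketch of that pattern, kept to the calls and arguments visible in the trace above (the helper wiring inside invalid.sh is more elaborate, and RANDOM_MN stands in for the generated string):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # invalid model number -> "Invalid MN ..."
  $rpc nvmf_create_subsystem -d "$RANDOM_MN" nqn.2016-06.io.spdk:cnode31530 2>&1 | grep -q 'Invalid MN'
  # cntlid must stay within 1-65519 and min must not exceed max -> "Invalid cntlid range ..."
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14720 -i 0      2>&1 | grep -q 'Invalid cntlid range'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12102 -i 65520  2>&1 | grep -q 'Invalid cntlid range'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30788 -I 0      2>&1 | grep -q 'Invalid cntlid range'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19797 -I 65520  2>&1 | grep -q 'Invalid cntlid range'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15317 -i 6 -I 5 2>&1 | grep -q 'Invalid cntlid range'
  # deleting a target that was never created -> "The specified target doesn't exist, cannot delete it."
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py \
      nvmf_delete_target --name foobar 2>&1 | grep -q "doesn't exist"

The script itself captures each response into $out and glob-matches it ([[ $out == *Invalid\ MN* ]]); grep -q above is only an illustrative stand-in for that check. Below, the log picks up with the connect_stress bring-up.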
00:13:17.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:17.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.864 --rc genhtml_branch_coverage=1 00:13:17.864 --rc genhtml_function_coverage=1 00:13:17.864 --rc genhtml_legend=1 00:13:17.864 --rc geninfo_all_blocks=1 00:13:17.864 --rc geninfo_unexecuted_blocks=1 00:13:17.864 00:13:17.864 ' 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:17.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.864 --rc genhtml_branch_coverage=1 00:13:17.864 --rc genhtml_function_coverage=1 00:13:17.864 --rc genhtml_legend=1 00:13:17.864 --rc geninfo_all_blocks=1 00:13:17.864 --rc geninfo_unexecuted_blocks=1 00:13:17.864 00:13:17.864 ' 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:17.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.864 --rc genhtml_branch_coverage=1 00:13:17.864 --rc genhtml_function_coverage=1 00:13:17.864 --rc genhtml_legend=1 00:13:17.864 --rc geninfo_all_blocks=1 00:13:17.864 --rc geninfo_unexecuted_blocks=1 00:13:17.864 00:13:17.864 ' 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:17.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.864 --rc genhtml_branch_coverage=1 00:13:17.864 --rc genhtml_function_coverage=1 00:13:17.864 --rc genhtml_legend=1 00:13:17.864 --rc geninfo_all_blocks=1 00:13:17.864 --rc geninfo_unexecuted_blocks=1 00:13:17.864 00:13:17.864 ' 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.864 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:17.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:17.865 00:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:26.008 00:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.008 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:26.009 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:26.009 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:26.009 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:26.009 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:26.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:13:26.009 00:13:26.009 --- 10.0.0.2 ping statistics --- 00:13:26.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.009 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:13:26.009 00:13:26.009 --- 10.0.0.1 ping statistics --- 00:13:26.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.009 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=3182405 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 3182405 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3182405 ']' 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:26.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:26.009 00:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.009 [2024-10-09 00:20:55.861104] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:13:26.009 [2024-10-09 00:20:55.861172] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.009 [2024-10-09 00:20:55.951735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:26.009 [2024-10-09 00:20:56.045410] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.009 [2024-10-09 00:20:56.045467] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.009 [2024-10-09 00:20:56.045475] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.009 [2024-10-09 00:20:56.045482] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.009 [2024-10-09 00:20:56.045489] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.009 [2024-10-09 00:20:56.046828] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.009 [2024-10-09 00:20:56.047156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.009 [2024-10-09 00:20:56.047156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.268 [2024-10-09 00:20:56.733826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
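Around this point the connect_stress fixture is being provisioned. The target was launched a few entries earlier inside the test network namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 3182405, reactors on the three cores of mask 0xE), with cvl_0_0 at 10.0.0.2 inside the namespace and cvl_0_1 at 10.0.0.1 outside, as the ping checks above confirm. The rpc_cmd calls traced here then stand up a TCP subsystem and a null backing device and start the stress client. A condensed sketch using the same arguments that appear in this log (rpc_cmd is the autotest wrapper around scripts/rpc.py; its retry and xtrace handling is omitted, and the backgrounding/PID capture below is a reconstruction of how the script tracks the client):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks

  # stress client: one core, 10-second connect/disconnect run against the new listener
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  kill -0 "$PERF_PID"   # liveness probe: signal 0 sends nothing, but fails if the client has exited

The repeated 'kill -0 3182584' lines that follow are exactly that probe: between the RPC batches prepared by the seq 1 20 / cat loop that builds up rpc.txt, the script keeps confirming that the stress process is still alive.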
00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.268 [2024-10-09 00:20:56.767321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.268 NULL1 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3182584 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.268 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.269 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.269 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.269 00:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:26.269 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.269 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.269 00:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.837 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.837 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:26.837 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.837 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.837 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.097 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.097 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:27.097 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.097 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.097 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.360 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.360 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:27.360 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.360 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.360 00:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.619 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.619 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:27.619 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.619 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.619 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.187 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.187 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:28.187 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.187 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.187 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.446 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.446 00:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:28.446 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.446 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.446 00:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.705 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.705 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:28.705 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.705 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.705 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.966 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.966 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:28.966 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.966 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.966 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.225 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.225 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:29.225 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.225 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.225 00:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.793 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.793 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:29.793 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.793 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.793 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.053 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.053 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:30.053 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.053 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.053 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.312 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.312 00:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:30.312 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.312 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.312 00:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.570 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.570 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:30.570 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.570 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.570 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.828 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.828 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:30.828 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.828 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.828 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.414 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.414 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:31.414 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.414 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.414 00:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.680 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.680 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:31.680 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.680 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.680 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.939 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.939 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:31.939 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.939 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.939 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.197 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.197 00:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:32.197 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.197 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.197 00:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.457 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.457 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:32.457 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.457 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.457 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.024 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.024 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:33.024 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.024 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.024 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.284 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.284 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:33.284 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.284 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.284 00:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.542 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.542 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:33.542 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.542 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.542 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.801 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.801 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:33.801 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.801 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.801 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.369 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.369 00:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:34.369 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.369 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.369 00:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.630 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.630 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:34.630 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.630 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.630 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.890 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.890 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:34.890 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.890 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.890 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.147 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.147 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:35.147 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.147 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.147 00:21:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.406 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.406 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:35.406 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.406 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.406 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.973 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.973 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:35.973 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.973 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.973 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.231 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.231 00:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:36.231 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.231 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.232 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.490 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3182584 00:13:36.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3182584) - No such process 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3182584 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:36.490 00:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:36.490 rmmod nvme_tcp 00:13:36.490 rmmod nvme_fabrics 00:13:36.490 rmmod nvme_keyring 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 3182405 ']' 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 3182405 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3182405 ']' 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3182405 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3182405 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
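While the connect_stress initiator runs, the harness loops on kill -0 $PERF_PID and re-issues an RPC batch from rpc.txt until the stress process exits -- visible above as the repeating connect_stress.sh@34/@35 pairs that end in "kill: (3182584) - No such process" once the 10-second run finishes. A rough reconstruction of that supervision loop, with the seq 1 20 / cat step that fills rpc.txt omitted since its payload is not visible in this excerpt:

    # Reconstruction only; the real loop lives in test/nvmf/target/connect_stress.sh.
    rpcs=$testdir/rpc.txt          # batched RPCs built by the seq 1 20 / cat loop above
    PERF_PID=3182584               # PID of the backgrounded connect_stress binary
    while kill -0 "$PERF_PID"; do  # until connect_stress (-t 10) exits
        rpc_cmd < "$rpcs"          # keep the target servicing RPCs in the meantime
    done
    wait "$PERF_PID"               # reap it; the failed kill -0 above ends the loop
    rm -f "$rpcs"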
00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3182405' 00:13:36.490 killing process with pid 3182405 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3182405 00:13:36.490 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3182405 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.750 00:21:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:39.301 00:13:39.301 real 0m21.280s 00:13:39.301 user 0m42.030s 00:13:39.301 sys 0m9.356s 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.301 ************************************ 00:13:39.301 END TEST nvmf_connect_stress 00:13:39.301 ************************************ 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.301 ************************************ 00:13:39.301 START TEST nvmf_fused_ordering 00:13:39.301 ************************************ 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:39.301 * Looking for test storage... 
00:13:39.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:39.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.301 --rc genhtml_branch_coverage=1 00:13:39.301 --rc genhtml_function_coverage=1 00:13:39.301 --rc genhtml_legend=1 00:13:39.301 --rc geninfo_all_blocks=1 00:13:39.301 --rc geninfo_unexecuted_blocks=1 00:13:39.301 00:13:39.301 ' 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:39.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.301 --rc genhtml_branch_coverage=1 00:13:39.301 --rc genhtml_function_coverage=1 00:13:39.301 --rc genhtml_legend=1 00:13:39.301 --rc geninfo_all_blocks=1 00:13:39.301 --rc geninfo_unexecuted_blocks=1 00:13:39.301 00:13:39.301 ' 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:39.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.301 --rc genhtml_branch_coverage=1 00:13:39.301 --rc genhtml_function_coverage=1 00:13:39.301 --rc genhtml_legend=1 00:13:39.301 --rc geninfo_all_blocks=1 00:13:39.301 --rc geninfo_unexecuted_blocks=1 00:13:39.301 00:13:39.301 ' 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:39.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.301 --rc genhtml_branch_coverage=1 00:13:39.301 --rc genhtml_function_coverage=1 00:13:39.301 --rc genhtml_legend=1 00:13:39.301 --rc geninfo_all_blocks=1 00:13:39.301 --rc geninfo_unexecuted_blocks=1 00:13:39.301 00:13:39.301 ' 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.301 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:39.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:39.302 00:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:47.531 00:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:47.531 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:47.531 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:47.531 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:47.532 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:47.532 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:47.532 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:47.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:13:47.532 00:13:47.532 --- 10.0.0.2 ping statistics --- 00:13:47.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.532 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:13:47.532 00:13:47.532 --- 10.0.0.1 ping statistics --- 00:13:47.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.532 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=3189492 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 3189492 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3189492 ']' 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:47.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.532 [2024-10-09 00:21:17.181223] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:13:47.532 [2024-10-09 00:21:17.181286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.532 [2024-10-09 00:21:17.269194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.532 [2024-10-09 00:21:17.360540] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.532 [2024-10-09 00:21:17.360597] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.532 [2024-10-09 00:21:17.360606] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.532 [2024-10-09 00:21:17.360613] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.532 [2024-10-09 00:21:17.360619] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.532 [2024-10-09 00:21:17.361395] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:47.532 00:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.532 [2024-10-09 00:21:18.051071] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.532 [2024-10-09 00:21:18.075318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:47.532 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.533 NULL1 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.533 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:47.533 [2024-10-09 00:21:18.145742] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
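In outline, the fused-ordering run traced above comes down to a short command sequence. The following is a hedged, hand-written sketch for readability, not part of the test scripts: it assumes the cvl_0_0/cvl_0_1 interface names, the 10.0.0.1/10.0.0.2 addresses and the cnode1 NQN used by this run, execution from the SPDK repository root, and that the rpc_cmd helper seen in the trace is equivalent to scripts/rpc.py talking to the default /var/tmp/spdk.sock socket.

  # move the target-side port into a private network namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

  # start the target inside the namespace, then provision it over JSON-RPC
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # connect from the initiator side and drive the fused command sequence
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The output that follows is that tool attaching to cnode1 and counting off its operations.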
00:13:47.533 [2024-10-09 00:21:18.145813] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189548 ] 00:13:48.170 Attached to nqn.2016-06.io.spdk:cnode1 00:13:48.170 Namespace ID: 1 size: 1GB 00:13:48.170 fused_ordering(0) [fused_ordering(1) through fused_ordering(958) elided: 958 repetitive per-operation progress lines with timestamps running from 00:13:48.170 to 00:13:49.857; the enumeration resumes at fused_ordering(959) below and completes at fused_ordering(1023)]
00:13:49.857 fused_ordering(959) 00:13:49.857 fused_ordering(960) 00:13:49.857 fused_ordering(961) 00:13:49.857 fused_ordering(962) 00:13:49.857 fused_ordering(963) 00:13:49.857 fused_ordering(964) 00:13:49.857 fused_ordering(965) 00:13:49.857 fused_ordering(966) 00:13:49.857 fused_ordering(967) 00:13:49.857 fused_ordering(968) 00:13:49.857 fused_ordering(969) 00:13:49.857 fused_ordering(970) 00:13:49.857 fused_ordering(971) 00:13:49.857 fused_ordering(972) 00:13:49.857 fused_ordering(973) 00:13:49.857 fused_ordering(974) 00:13:49.857 fused_ordering(975) 00:13:49.857 fused_ordering(976) 00:13:49.857 fused_ordering(977) 00:13:49.857 fused_ordering(978) 00:13:49.857 fused_ordering(979) 00:13:49.857 fused_ordering(980) 00:13:49.857 fused_ordering(981) 00:13:49.857 fused_ordering(982) 00:13:49.857 fused_ordering(983) 00:13:49.857 fused_ordering(984) 00:13:49.857 fused_ordering(985) 00:13:49.857 fused_ordering(986) 00:13:49.857 fused_ordering(987) 00:13:49.857 fused_ordering(988) 00:13:49.857 fused_ordering(989) 00:13:49.857 fused_ordering(990) 00:13:49.857 fused_ordering(991) 00:13:49.857 fused_ordering(992) 00:13:49.857 fused_ordering(993) 00:13:49.857 fused_ordering(994) 00:13:49.857 fused_ordering(995) 00:13:49.857 fused_ordering(996) 00:13:49.857 fused_ordering(997) 00:13:49.857 fused_ordering(998) 00:13:49.857 fused_ordering(999) 00:13:49.857 fused_ordering(1000) 00:13:49.857 fused_ordering(1001) 00:13:49.857 fused_ordering(1002) 00:13:49.857 fused_ordering(1003) 00:13:49.857 fused_ordering(1004) 00:13:49.857 fused_ordering(1005) 00:13:49.857 fused_ordering(1006) 00:13:49.857 fused_ordering(1007) 00:13:49.857 fused_ordering(1008) 00:13:49.857 fused_ordering(1009) 00:13:49.857 fused_ordering(1010) 00:13:49.857 fused_ordering(1011) 00:13:49.857 fused_ordering(1012) 00:13:49.857 fused_ordering(1013) 00:13:49.857 fused_ordering(1014) 00:13:49.857 fused_ordering(1015) 00:13:49.857 fused_ordering(1016) 00:13:49.857 fused_ordering(1017) 00:13:49.857 fused_ordering(1018) 00:13:49.857 fused_ordering(1019) 00:13:49.857 fused_ordering(1020) 00:13:49.857 fused_ordering(1021) 00:13:49.857 fused_ordering(1022) 00:13:49.857 fused_ordering(1023) 00:13:49.857 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:49.857 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:49.857 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:49.857 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:49.857 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.857 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:49.857 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.857 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.857 rmmod nvme_tcp 00:13:50.119 rmmod nvme_fabrics 00:13:50.119 rmmod nvme_keyring 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:50.119 00:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 3189492 ']' 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 3189492 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3189492 ']' 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3189492 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3189492 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3189492' 00:13:50.119 killing process with pid 3189492 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3189492 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3189492 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.119 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.670 00:13:52.670 real 0m13.413s 00:13:52.670 user 0m7.045s 00:13:52.670 sys 0m7.140s 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:52.670 ************************************ 00:13:52.670 END TEST nvmf_fused_ordering 00:13:52.670 
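The cleanup traced just above reverses that setup; under the same assumptions it is roughly:

  # unload the kernel initiator modules and stop the target (pid taken from the trace)
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  kill 3189492
  # drop only the firewall rules tagged SPDK_NVMF, then dismantle the test namespace
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1

Deleting the namespace hands the physical port back to the default network namespace, leaving the machine ready for the next test in the run.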
************************************ 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.670 ************************************ 00:13:52.670 START TEST nvmf_ns_masking 00:13:52.670 ************************************ 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:52.670 * Looking for test storage... 00:13:52.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:13:52.670 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:52.670 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:52.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.671 --rc genhtml_branch_coverage=1 00:13:52.671 --rc genhtml_function_coverage=1 00:13:52.671 --rc genhtml_legend=1 00:13:52.671 --rc geninfo_all_blocks=1 00:13:52.671 --rc geninfo_unexecuted_blocks=1 00:13:52.671 00:13:52.671 ' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:52.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.671 --rc genhtml_branch_coverage=1 00:13:52.671 --rc genhtml_function_coverage=1 00:13:52.671 --rc genhtml_legend=1 00:13:52.671 --rc geninfo_all_blocks=1 00:13:52.671 --rc geninfo_unexecuted_blocks=1 00:13:52.671 00:13:52.671 ' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:52.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.671 --rc genhtml_branch_coverage=1 00:13:52.671 --rc genhtml_function_coverage=1 00:13:52.671 --rc genhtml_legend=1 00:13:52.671 --rc geninfo_all_blocks=1 00:13:52.671 --rc geninfo_unexecuted_blocks=1 00:13:52.671 00:13:52.671 ' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:52.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.671 --rc genhtml_branch_coverage=1 00:13:52.671 --rc genhtml_function_coverage=1 00:13:52.671 --rc genhtml_legend=1 00:13:52.671 --rc geninfo_all_blocks=1 00:13:52.671 --rc geninfo_unexecuted_blocks=1 00:13:52.671 00:13:52.671 ' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2f243865-7d54-45c8-b20c-3ca194ba5233 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=05fa6d7b-2b96-40cb-9c9b-9b56c0132473 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dd122958-eaed-4e20-a105-a370979f49a1 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:52.671 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:52.672 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.672 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.672 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.672 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:52.672 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:52.672 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.672 00:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.821 00:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:00.821 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:00.821 00:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:00.821 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:00.821 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
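The trace above is the harness picking its test NICs: it builds arrays of supported PCI vendor:device IDs (Intel E810 0x8086:0x159b/0x1592, X722 0x37d2, and a set of Mellanox 0x15b3 parts), keeps the e810 entries for this run, and resolves each matching PCI function to its kernel interface through sysfs, which is where the two "Found 0000:4b:00.x ... cvl_0_x" lines come from. The stand-alone sketch below shows the same idea; the supported-ID list is trimmed and the pci_bus_cache bookkeeping of the real nvmf/common.sh is omitted, so treat it as an illustration under those assumptions rather than the harness code itself.

#!/usr/bin/env bash
# Hedged sketch of the device-discovery step traced above: walk the PCI bus,
# keep functions whose vendor:device pair is on the supported-NIC list
# (e.g. Intel E810 = 0x8086:0x159b), and resolve each one to its kernel
# net device via sysfs. IDs and paths mirror what the log prints; the list
# here is deliberately shortened and illustrative.
declare -a supported=( "0x8086:0x159b" "0x8086:0x1592" "0x8086:0x37d2" )

for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
    for id in "${supported[@]}"; do
        [[ "$vendor:$device" == "$id" ]] || continue
        # A NIC bound to a netdev driver exposes its interface name under <pci>/net/
        for net in "$dev"/net/*; do
            [[ -e "$net" ]] || continue
            echo "Found ${dev##*/} ($vendor - $device): ${net##*/}"
        done
    done
done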
00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:00.821 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.821 00:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.821 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:14:00.822 00:14:00.822 --- 10.0.0.2 ping statistics --- 00:14:00.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.822 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:14:00.822 00:14:00.822 --- 10.0.0.1 ping statistics --- 00:14:00.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.822 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=3194256 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 3194256 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3194256 ']' 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.822 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.822 [2024-10-09 00:21:30.783775] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:14:00.822 [2024-10-09 00:21:30.783844] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.822 [2024-10-09 00:21:30.874662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.822 [2024-10-09 00:21:30.969072] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.822 [2024-10-09 00:21:30.969134] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.822 [2024-10-09 00:21:30.969142] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.822 [2024-10-09 00:21:30.969150] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.822 [2024-10-09 00:21:30.969156] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
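At this point the TCP fixture is fully in place: one E810 port (cvl_0_0) has been moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the other port (cvl_0_1) keeps 10.0.0.1/24 in the default namespace, port 4420 is opened in iptables, both directions answer ping, and nvmf_tgt has been launched inside the namespace (the nvmfpid=3194256 / waitforlisten lines). A condensed recreation of that setup is sketched below; interface names and the relative nvmf_tgt path are placeholders for what the logged run resolves from its own environment, so this is an illustration of the topology, not the test script.

#!/usr/bin/env bash
# Sketch of the two-port NVMe/TCP topology set up above: the target-side
# port lives in its own network namespace with the SPDK target, while the
# initiator-side port stays in the default namespace.
set -e

tgt_if=cvl_0_0          # port handed to the target namespace (placeholder)
ini_if=cvl_0_1          # port the initiator keeps (placeholder)
ns=cvl_0_0_ns_spdk

ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in through the initiator-side interface.
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1  # target -> initiator

# Start the NVMe-oF target inside the namespace, as the log does with
# nvmf_tgt -i 0 -e 0xFFFF, then drive it over /var/tmp/spdk.sock via rpc.py.
ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

Running the target inside a namespace is what lets the two physical ports on one machine behave like a real fabric link: traffic between 10.0.0.1 and 10.0.0.2 has to cross the wire instead of being short-circuited through the local stack.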
00:14:00.822 [2024-10-09 00:21:30.969950] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.089 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.089 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:01.089 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:01.089 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:01.089 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:01.089 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.089 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:01.351 [2024-10-09 00:21:31.820224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.351 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:01.351 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:01.351 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:01.611 Malloc1 00:14:01.611 00:21:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:01.872 Malloc2 00:14:01.872 00:21:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:01.872 00:21:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:02.132 00:21:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.392 [2024-10-09 00:21:32.852867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.393 00:21:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:02.393 00:21:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd122958-eaed-4e20-a105-a370979f49a1 -a 10.0.0.2 -s 4420 -i 4 00:14:02.653 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.653 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.653 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.653 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:02.653 
00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.569 [ 0]:0x1 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16920521c6e148be97d632066ca04e77 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16920521c6e148be97d632066ca04e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.569 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.830 [ 0]:0x1 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16920521c6e148be97d632066ca04e77 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16920521c6e148be97d632066ca04e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.830 00:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:04.830 [ 1]:0x2 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:04.830 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.091 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4f1ecc7ff2342b8b832189d3c62f982 00:14:05.091 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4f1ecc7ff2342b8b832189d3c62f982 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.091 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:05.091 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.091 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.366 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:05.366 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:05.367 00:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd122958-eaed-4e20-a105-a370979f49a1 -a 10.0.0.2 -s 4420 -i 4 00:14:05.628 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:05.628 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:05.628 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.628 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:05.628 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:05.628 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.543 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.818 [ 0]:0x2 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=f4f1ecc7ff2342b8b832189d3c62f982 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4f1ecc7ff2342b8b832189d3c62f982 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.818 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.082 [ 0]:0x1 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16920521c6e148be97d632066ca04e77 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16920521c6e148be97d632066ca04e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.082 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.082 [ 1]:0x2 00:14:08.083 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.083 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.083 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4f1ecc7ff2342b8b832189d3c62f982 00:14:08.083 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4f1ecc7ff2342b8b832189d3c62f982 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.083 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.344 00:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.344 [ 0]:0x2 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4f1ecc7ff2342b8b832189d3c62f982 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4f1ecc7ff2342b8b832189d3c62f982 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.344 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.605 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:08.605 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd122958-eaed-4e20-a105-a370979f49a1 -a 10.0.0.2 -s 4420 -i 4 00:14:08.605 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:08.605 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:08.605 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.605 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:08.605 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:08.605 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:11.159 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:11.159 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:11.159 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.159 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.160 [ 0]:0x1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=16920521c6e148be97d632066ca04e77 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 16920521c6e148be97d632066ca04e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.160 [ 1]:0x2 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4f1ecc7ff2342b8b832189d3c62f982 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4f1ecc7ff2342b8b832189d3c62f982 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.160 [ 0]:0x2 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4f1ecc7ff2342b8b832189d3c62f982 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4f1ecc7ff2342b8b832189d3c62f982 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.160 00:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:11.160 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:11.422 [2024-10-09 00:21:41.865414] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:11.422 request: 00:14:11.422 { 00:14:11.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.422 "nsid": 2, 00:14:11.422 "host": "nqn.2016-06.io.spdk:host1", 00:14:11.422 "method": "nvmf_ns_remove_host", 00:14:11.422 "req_id": 1 00:14:11.422 } 00:14:11.422 Got JSON-RPC error response 00:14:11.422 response: 00:14:11.422 { 00:14:11.422 "code": -32602, 00:14:11.422 "message": "Invalid parameters" 00:14:11.422 } 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.422 00:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.422 [ 0]:0x2 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.422 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.422 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4f1ecc7ff2342b8b832189d3c62f982 00:14:11.422 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4f1ecc7ff2342b8b832189d3c62f982 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.422 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:11.422 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3196737 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3196737 /var/tmp/host.sock 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3196737 ']' 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:11.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.683 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.683 [2024-10-09 00:21:42.125880] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:14:11.683 [2024-10-09 00:21:42.125935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196737 ] 00:14:11.683 [2024-10-09 00:21:42.203753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.683 [2024-10-09 00:21:42.267746] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.649 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.649 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:12.649 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.649 00:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.649 00:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2f243865-7d54-45c8-b20c-3ca194ba5233 00:14:12.649 00:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:12.649 00:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2F2438657D5445C8B20C3CA194BA5233 -i 00:14:12.910 00:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 05fa6d7b-2b96-40cb-9c9b-9b56c0132473 00:14:12.910 00:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:12.910 00:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 05FA6D7B2B9640CB9C9B9B56C0132473 -i 00:14:13.171 00:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:13.432 00:21:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:13.432 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:13.432 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:14.002 nvme0n1 00:14:14.002 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:14.002 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:14.263 nvme1n2 00:14:14.263 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:14.263 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:14.263 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:14.263 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:14.263 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:14.536 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:14.536 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:14.537 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:14.537 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:14.537 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2f243865-7d54-45c8-b20c-3ca194ba5233 == \2\f\2\4\3\8\6\5\-\7\d\5\4\-\4\5\c\8\-\b\2\0\c\-\3\c\a\1\9\4\b\a\5\2\3\3 ]] 00:14:14.537 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:14.537 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:14.537 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:14.797 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
05fa6d7b-2b96-40cb-9c9b-9b56c0132473 == \0\5\f\a\6\d\7\b\-\2\b\9\6\-\4\0\c\b\-\9\c\9\b\-\9\b\5\6\c\0\1\3\2\4\7\3 ]] 00:14:14.797 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3196737 00:14:14.797 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3196737 ']' 00:14:14.798 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3196737 00:14:14.798 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:14.798 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.798 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3196737 00:14:14.798 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:14.798 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:14.798 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3196737' 00:14:14.798 killing process with pid 3196737 00:14:14.798 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3196737 00:14:14.798 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3196737 00:14:15.059 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.320 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:15.321 rmmod nvme_tcp 00:14:15.321 rmmod nvme_fabrics 00:14:15.321 rmmod nvme_keyring 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 3194256 ']' 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 3194256 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3194256 ']' 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3194256 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3194256 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3194256' 00:14:15.321 killing process with pid 3194256 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3194256 00:14:15.321 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3194256 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.581 00:21:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.496 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:17.757 00:14:17.757 real 0m25.237s 00:14:17.757 user 0m25.708s 00:14:17.757 sys 0m7.937s 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:17.757 ************************************ 00:14:17.757 END TEST nvmf_ns_masking 00:14:17.757 ************************************ 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
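Before the nvme_cli output starts below, a hedged recap of the check the ns_masking trace above exercises repeatedly: ns_is_visible greps the output of nvme list-ns for a namespace ID and then compares the NGUID reported by nvme id-ns against the all-zero value. The standalone sketch below is illustrative only; the controller path /dev/nvme0 and namespace ID 0x1 are placeholders, not values taken from this run.

    # Illustrative sketch of the ns_is_visible check, assuming nvme-cli and jq are installed.
    nsid=0x1
    if nvme list-ns /dev/nvme0 | grep -q "$nsid"; then
        # Namespace is listed; read its NGUID and make sure it is not the all-zero placeholder.
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        if [[ "$nguid" != "00000000000000000000000000000000" ]]; then
            echo "namespace $nsid is visible to this host (nguid=$nguid)"
        fi
    else
        echo "namespace $nsid is masked from this host"
    fi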
00:14:17.757 ************************************ 00:14:17.757 START TEST nvmf_nvme_cli 00:14:17.757 ************************************ 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:17.757 * Looking for test storage... 00:14:17.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:14:17.757 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:18.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.018 --rc genhtml_branch_coverage=1 00:14:18.018 --rc genhtml_function_coverage=1 00:14:18.018 --rc genhtml_legend=1 00:14:18.018 --rc geninfo_all_blocks=1 00:14:18.018 --rc geninfo_unexecuted_blocks=1 00:14:18.018 00:14:18.018 ' 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:18.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.018 --rc genhtml_branch_coverage=1 00:14:18.018 --rc genhtml_function_coverage=1 00:14:18.018 --rc genhtml_legend=1 00:14:18.018 --rc geninfo_all_blocks=1 00:14:18.018 --rc geninfo_unexecuted_blocks=1 00:14:18.018 00:14:18.018 ' 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:18.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.018 --rc genhtml_branch_coverage=1 00:14:18.018 --rc genhtml_function_coverage=1 00:14:18.018 --rc genhtml_legend=1 00:14:18.018 --rc geninfo_all_blocks=1 00:14:18.018 --rc geninfo_unexecuted_blocks=1 00:14:18.018 00:14:18.018 ' 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:18.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.018 --rc genhtml_branch_coverage=1 00:14:18.018 --rc genhtml_function_coverage=1 00:14:18.018 --rc genhtml_legend=1 00:14:18.018 --rc geninfo_all_blocks=1 00:14:18.018 --rc geninfo_unexecuted_blocks=1 00:14:18.018 00:14:18.018 ' 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
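The version-comparison trace above (lt 1.15 2 via cmp_versions) is the harness deciding which lcov option set to use. A minimal sketch of the same field-by-field comparison follows; the function name version_lt and the sample versions are placeholders, and the real scripts/common.sh implementation handles more cases.

    # Hedged sketch: compare two dotted versions field by field, as the trace above does.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0   # first field that differs decides
            (( x > y )) && return 1
        done
        return 1                      # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2, use the legacy option set"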
00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.018 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:18.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:18.019 00:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:18.019 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:26.266 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.266 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:26.267 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.267 
00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:26.267 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:26.267 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:26.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:14:26.267 00:14:26.267 --- 10.0.0.2 ping statistics --- 00:14:26.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.267 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:14:26.267 00:14:26.267 --- 10.0.0.1 ping statistics --- 00:14:26.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.267 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=3201694 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 3201694 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3201694 ']' 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.267 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.267 [2024-10-09 00:21:56.021286] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:14:26.268 [2024-10-09 00:21:56.021355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.268 [2024-10-09 00:21:56.111865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.268 [2024-10-09 00:21:56.209532] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.268 [2024-10-09 00:21:56.209590] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.268 [2024-10-09 00:21:56.209599] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.268 [2024-10-09 00:21:56.209607] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.268 [2024-10-09 00:21:56.209613] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.268 [2024-10-09 00:21:56.211582] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.268 [2024-10-09 00:21:56.211761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.268 [2024-10-09 00:21:56.211893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.268 [2024-10-09 00:21:56.211893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.268 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.268 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:26.268 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:26.268 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:26.268 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.268 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.268 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.268 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.268 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.268 [2024-10-09 00:21:56.899763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.530 Malloc0 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.530 Malloc1 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.530 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.530 [2024-10-09 00:21:57.001912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:26.530 00:14:26.530 Discovery Log Number of Records 2, Generation counter 2 00:14:26.530 =====Discovery Log Entry 0====== 00:14:26.530 trtype: tcp 00:14:26.530 adrfam: ipv4 00:14:26.530 subtype: current discovery subsystem 00:14:26.530 treq: not required 00:14:26.530 portid: 0 00:14:26.530 trsvcid: 4420 00:14:26.530 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:26.530 traddr: 10.0.0.2 00:14:26.530 eflags: explicit discovery connections, duplicate discovery information 00:14:26.530 sectype: none 00:14:26.530 =====Discovery Log Entry 1====== 00:14:26.530 trtype: tcp 00:14:26.530 adrfam: ipv4 00:14:26.530 subtype: nvme subsystem 00:14:26.530 treq: not required 00:14:26.530 portid: 0 00:14:26.530 trsvcid: 4420 00:14:26.530 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:26.530 traddr: 10.0.0.2 00:14:26.530 eflags: none 00:14:26.530 sectype: none 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:26.530 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:26.531 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.444 00:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:28.444 00:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:28.444 00:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.444 00:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:28.444 00:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:28.444 00:21:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:30.358 00:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:30.358 /dev/nvme0n2 ]] 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:30.358 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.359 00:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:30.359 rmmod nvme_tcp 00:14:30.359 rmmod nvme_fabrics 00:14:30.359 rmmod nvme_keyring 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 3201694 ']' 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 3201694 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3201694 ']' 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3201694 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.359 00:22:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3201694 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3201694' 00:14:30.619 killing process with pid 3201694 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3201694 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3201694 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.619 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:33.177 00:14:33.177 real 0m15.055s 00:14:33.177 user 0m22.088s 00:14:33.177 sys 0m6.297s 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.177 ************************************ 00:14:33.177 END TEST nvmf_nvme_cli 00:14:33.177 ************************************ 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.177 ************************************ 00:14:33.177 START TEST nvmf_vfio_user 00:14:33.177 ************************************ 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:33.177 * Looking for test storage... 00:14:33.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:33.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.177 --rc genhtml_branch_coverage=1 00:14:33.177 --rc genhtml_function_coverage=1 00:14:33.177 --rc genhtml_legend=1 00:14:33.177 --rc geninfo_all_blocks=1 00:14:33.177 --rc geninfo_unexecuted_blocks=1 00:14:33.177 00:14:33.177 ' 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:33.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.177 --rc genhtml_branch_coverage=1 00:14:33.177 --rc genhtml_function_coverage=1 00:14:33.177 --rc genhtml_legend=1 00:14:33.177 --rc geninfo_all_blocks=1 00:14:33.177 --rc geninfo_unexecuted_blocks=1 00:14:33.177 00:14:33.177 ' 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:33.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.177 --rc genhtml_branch_coverage=1 00:14:33.177 --rc genhtml_function_coverage=1 00:14:33.177 --rc genhtml_legend=1 00:14:33.177 --rc geninfo_all_blocks=1 00:14:33.177 --rc geninfo_unexecuted_blocks=1 00:14:33.177 00:14:33.177 ' 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:33.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.177 --rc genhtml_branch_coverage=1 00:14:33.177 --rc genhtml_function_coverage=1 00:14:33.177 --rc genhtml_legend=1 00:14:33.177 --rc geninfo_all_blocks=1 00:14:33.177 --rc geninfo_unexecuted_blocks=1 00:14:33.177 00:14:33.177 ' 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.177 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:33.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
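The two malloc constants just set (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512) are the values that reappear further down in this trace as the positional arguments of the bdev_malloc_create RPC. In condensed form, with the rpc.py path shortened for readability, that call is:

    # 64 = bdev size in MiB, 512 = block size in bytes, -b names the resulting bdev
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1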
00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3203269 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3203269' 00:14:33.178 Process pid: 3203269 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3203269 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3203269 ']' 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.178 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:33.178 [2024-10-09 00:22:03.652598] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:14:33.178 [2024-10-09 00:22:03.652667] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.178 [2024-10-09 00:22:03.733678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.178 [2024-10-09 00:22:03.794233] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.178 [2024-10-09 00:22:03.794266] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:33.178 [2024-10-09 00:22:03.794272] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.178 [2024-10-09 00:22:03.794277] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.178 [2024-10-09 00:22:03.794281] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.178 [2024-10-09 00:22:03.795586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.178 [2024-10-09 00:22:03.795707] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.178 [2024-10-09 00:22:03.795858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.178 [2024-10-09 00:22:03.795964] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.133 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.133 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:34.133 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:35.081 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:35.081 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:35.081 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:35.081 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:35.081 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:35.081 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:35.349 Malloc1 00:14:35.349 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:35.622 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:35.622 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:35.887 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:35.887 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:35.887 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:36.148 Malloc2 00:14:36.148 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:14:36.148 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:36.408 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:36.670 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:36.670 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:36.670 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:36.670 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:36.670 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:36.670 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:36.670 [2024-10-09 00:22:07.162919] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:14:36.670 [2024-10-09 00:22:07.162985] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3203960 ] 00:14:36.670 [2024-10-09 00:22:07.191858] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:36.670 [2024-10-09 00:22:07.203438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:36.670 [2024-10-09 00:22:07.203455] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc630bd3000 00:14:36.670 [2024-10-09 00:22:07.204436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.670 [2024-10-09 00:22:07.205440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.670 [2024-10-09 00:22:07.206445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.670 [2024-10-09 00:22:07.207451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:36.670 [2024-10-09 00:22:07.208455] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:36.670 [2024-10-09 00:22:07.209457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.670 [2024-10-09 00:22:07.210462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:36.670 [2024-10-09 00:22:07.211463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:36.670 [2024-10-09 00:22:07.212469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:36.670 [2024-10-09 00:22:07.212476] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc630bc8000 00:14:36.670 [2024-10-09 00:22:07.213388] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:36.670 [2024-10-09 00:22:07.224847] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:36.670 [2024-10-09 00:22:07.224865] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:36.670 [2024-10-09 00:22:07.230590] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:36.670 [2024-10-09 00:22:07.230626] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:36.670 [2024-10-09 00:22:07.230693] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:36.670 [2024-10-09 00:22:07.230708] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:36.670 [2024-10-09 00:22:07.230712] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:36.670 [2024-10-09 00:22:07.231590] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:36.670 [2024-10-09 00:22:07.231597] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:36.670 [2024-10-09 00:22:07.231602] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:36.670 [2024-10-09 00:22:07.232594] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:36.670 [2024-10-09 00:22:07.232600] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:36.670 [2024-10-09 00:22:07.232606] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:36.670 [2024-10-09 00:22:07.233597] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:36.670 [2024-10-09 00:22:07.233603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:36.670 [2024-10-09 00:22:07.234604] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:36.670 [2024-10-09 
00:22:07.234610] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:36.670 [2024-10-09 00:22:07.234614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:36.670 [2024-10-09 00:22:07.234618] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:36.670 [2024-10-09 00:22:07.234723] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:36.670 [2024-10-09 00:22:07.234727] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:36.670 [2024-10-09 00:22:07.234732] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:36.670 [2024-10-09 00:22:07.235611] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:36.670 [2024-10-09 00:22:07.236615] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:36.670 [2024-10-09 00:22:07.237617] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:36.670 [2024-10-09 00:22:07.238618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:36.670 [2024-10-09 00:22:07.238668] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:36.671 [2024-10-09 00:22:07.239637] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:36.671 [2024-10-09 00:22:07.239643] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:36.671 [2024-10-09 00:22:07.239646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239661] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:36.671 [2024-10-09 00:22:07.239666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239678] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:36.671 [2024-10-09 00:22:07.239682] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:36.671 [2024-10-09 00:22:07.239685] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:36.671 [2024-10-09 00:22:07.239696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.239728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.239735] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:36.671 [2024-10-09 00:22:07.239739] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:36.671 [2024-10-09 00:22:07.239742] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:36.671 [2024-10-09 00:22:07.239745] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:36.671 [2024-10-09 00:22:07.239749] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:36.671 [2024-10-09 00:22:07.239752] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:36.671 [2024-10-09 00:22:07.239756] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239762] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.239782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.239792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.671 [2024-10-09 00:22:07.239798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.671 [2024-10-09 00:22:07.239804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.671 [2024-10-09 00:22:07.239810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.671 [2024-10-09 00:22:07.239814] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.239835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.239839] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:36.671 [2024-10-09 00:22:07.239843] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239847] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239853] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.239867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.239910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239915] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239921] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:36.671 [2024-10-09 00:22:07.239924] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:36.671 [2024-10-09 00:22:07.239927] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:36.671 [2024-10-09 00:22:07.239932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.239944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.239952] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:36.671 [2024-10-09 00:22:07.239960] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239966] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.239971] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:36.671 [2024-10-09 00:22:07.239975] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:36.671 [2024-10-09 00:22:07.239978] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:36.671 [2024-10-09 00:22:07.239982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.240000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.240009] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.240014] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.240019] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:36.671 [2024-10-09 00:22:07.240023] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:36.671 [2024-10-09 00:22:07.240025] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:36.671 [2024-10-09 00:22:07.240029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.240042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.240048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.240052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.240059] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.240063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.240067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.240071] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.240075] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:36.671 [2024-10-09 00:22:07.240078] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:36.671 [2024-10-09 00:22:07.240082] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:36.671 [2024-10-09 00:22:07.240096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.240102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.240110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.240115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.240123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.240130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.240139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.240150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:36.671 [2024-10-09 00:22:07.240159] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:36.671 [2024-10-09 00:22:07.240162] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:36.671 [2024-10-09 00:22:07.240165] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:36.671 [2024-10-09 00:22:07.240168] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:36.671 [2024-10-09 00:22:07.240170] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:36.671 [2024-10-09 00:22:07.240174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:36.671 [2024-10-09 00:22:07.240180] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:36.671 [2024-10-09 00:22:07.240183] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:36.671 [2024-10-09 00:22:07.240186] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:36.671 [2024-10-09 00:22:07.240190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:36.671 [2024-10-09 00:22:07.240195] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:36.671 [2024-10-09 00:22:07.240198] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:36.671 [2024-10-09 00:22:07.240201] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:36.672 [2024-10-09 00:22:07.240205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:36.672 [2024-10-09 00:22:07.240211] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:36.672 [2024-10-09 00:22:07.240214] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:36.672 [2024-10-09 00:22:07.240216] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:36.672 [2024-10-09 00:22:07.240221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:36.672 [2024-10-09 00:22:07.240226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:36.672 [2024-10-09 00:22:07.240234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:36.672 [2024-10-09 00:22:07.240242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:36.672 [2024-10-09 00:22:07.240247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:36.672 ===================================================== 00:14:36.672 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:36.672 ===================================================== 00:14:36.672 Controller Capabilities/Features 00:14:36.672 ================================ 00:14:36.672 Vendor ID: 4e58 00:14:36.672 Subsystem Vendor ID: 4e58 00:14:36.672 Serial Number: SPDK1 00:14:36.672 Model Number: SPDK bdev Controller 00:14:36.672 Firmware Version: 25.01 00:14:36.672 Recommended Arb Burst: 6 00:14:36.672 IEEE OUI Identifier: 8d 6b 50 00:14:36.672 Multi-path I/O 00:14:36.672 May have multiple subsystem ports: Yes 00:14:36.672 May have multiple controllers: Yes 00:14:36.672 Associated with SR-IOV VF: No 00:14:36.672 Max Data Transfer Size: 131072 00:14:36.672 Max Number of Namespaces: 32 00:14:36.672 Max Number of I/O Queues: 127 00:14:36.672 NVMe Specification Version (VS): 1.3 00:14:36.672 NVMe Specification Version (Identify): 1.3 00:14:36.672 Maximum Queue Entries: 256 00:14:36.672 Contiguous Queues Required: Yes 00:14:36.672 Arbitration Mechanisms Supported 00:14:36.672 Weighted Round Robin: Not Supported 00:14:36.672 Vendor Specific: Not Supported 00:14:36.672 Reset Timeout: 15000 ms 00:14:36.672 Doorbell Stride: 4 bytes 00:14:36.672 NVM Subsystem Reset: Not Supported 00:14:36.672 Command Sets Supported 00:14:36.672 NVM Command Set: Supported 00:14:36.672 Boot Partition: Not Supported 00:14:36.672 Memory Page Size Minimum: 4096 bytes 00:14:36.672 Memory Page Size Maximum: 4096 bytes 00:14:36.672 Persistent Memory Region: Not Supported 00:14:36.672 Optional Asynchronous Events Supported 00:14:36.672 Namespace Attribute Notices: Supported 00:14:36.672 Firmware Activation Notices: Not Supported 00:14:36.672 ANA Change Notices: Not Supported 00:14:36.672 PLE Aggregate Log Change Notices: Not Supported 00:14:36.672 LBA Status Info Alert Notices: Not Supported 00:14:36.672 EGE Aggregate Log Change Notices: Not Supported 00:14:36.672 Normal NVM Subsystem Shutdown event: Not Supported 00:14:36.672 Zone Descriptor Change Notices: Not Supported 00:14:36.672 Discovery Log Change Notices: Not Supported 00:14:36.672 Controller Attributes 00:14:36.672 128-bit Host Identifier: Supported 00:14:36.672 Non-Operational Permissive Mode: Not Supported 00:14:36.672 NVM Sets: Not Supported 00:14:36.672 Read Recovery Levels: Not Supported 00:14:36.672 Endurance Groups: Not Supported 00:14:36.672 Predictable Latency Mode: Not Supported 00:14:36.672 Traffic Based Keep ALive: Not Supported 00:14:36.672 Namespace Granularity: Not Supported 00:14:36.672 SQ Associations: Not Supported 00:14:36.672 UUID List: Not Supported 00:14:36.672 Multi-Domain Subsystem: Not Supported 00:14:36.672 Fixed Capacity Management: Not Supported 00:14:36.672 Variable Capacity Management: Not Supported 00:14:36.672 Delete Endurance Group: Not Supported 00:14:36.672 Delete NVM Set: Not Supported 00:14:36.672 Extended LBA Formats Supported: Not Supported 00:14:36.672 Flexible Data Placement Supported: Not Supported 00:14:36.672 00:14:36.672 Controller Memory Buffer Support 00:14:36.672 ================================ 00:14:36.672 Supported: No 00:14:36.672 00:14:36.672 Persistent Memory Region Support 00:14:36.672 
================================ 00:14:36.672 Supported: No 00:14:36.672 00:14:36.672 Admin Command Set Attributes 00:14:36.672 ============================ 00:14:36.672 Security Send/Receive: Not Supported 00:14:36.672 Format NVM: Not Supported 00:14:36.672 Firmware Activate/Download: Not Supported 00:14:36.672 Namespace Management: Not Supported 00:14:36.672 Device Self-Test: Not Supported 00:14:36.672 Directives: Not Supported 00:14:36.672 NVMe-MI: Not Supported 00:14:36.672 Virtualization Management: Not Supported 00:14:36.672 Doorbell Buffer Config: Not Supported 00:14:36.672 Get LBA Status Capability: Not Supported 00:14:36.672 Command & Feature Lockdown Capability: Not Supported 00:14:36.672 Abort Command Limit: 4 00:14:36.672 Async Event Request Limit: 4 00:14:36.672 Number of Firmware Slots: N/A 00:14:36.672 Firmware Slot 1 Read-Only: N/A 00:14:36.672 Firmware Activation Without Reset: N/A 00:14:36.672 Multiple Update Detection Support: N/A 00:14:36.672 Firmware Update Granularity: No Information Provided 00:14:36.672 Per-Namespace SMART Log: No 00:14:36.672 Asymmetric Namespace Access Log Page: Not Supported 00:14:36.672 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:36.672 Command Effects Log Page: Supported 00:14:36.672 Get Log Page Extended Data: Supported 00:14:36.672 Telemetry Log Pages: Not Supported 00:14:36.672 Persistent Event Log Pages: Not Supported 00:14:36.672 Supported Log Pages Log Page: May Support 00:14:36.672 Commands Supported & Effects Log Page: Not Supported 00:14:36.672 Feature Identifiers & Effects Log Page:May Support 00:14:36.672 NVMe-MI Commands & Effects Log Page: May Support 00:14:36.672 Data Area 4 for Telemetry Log: Not Supported 00:14:36.672 Error Log Page Entries Supported: 128 00:14:36.672 Keep Alive: Supported 00:14:36.672 Keep Alive Granularity: 10000 ms 00:14:36.672 00:14:36.672 NVM Command Set Attributes 00:14:36.672 ========================== 00:14:36.672 Submission Queue Entry Size 00:14:36.672 Max: 64 00:14:36.672 Min: 64 00:14:36.672 Completion Queue Entry Size 00:14:36.672 Max: 16 00:14:36.672 Min: 16 00:14:36.672 Number of Namespaces: 32 00:14:36.672 Compare Command: Supported 00:14:36.672 Write Uncorrectable Command: Not Supported 00:14:36.672 Dataset Management Command: Supported 00:14:36.672 Write Zeroes Command: Supported 00:14:36.672 Set Features Save Field: Not Supported 00:14:36.672 Reservations: Not Supported 00:14:36.672 Timestamp: Not Supported 00:14:36.672 Copy: Supported 00:14:36.672 Volatile Write Cache: Present 00:14:36.672 Atomic Write Unit (Normal): 1 00:14:36.672 Atomic Write Unit (PFail): 1 00:14:36.672 Atomic Compare & Write Unit: 1 00:14:36.672 Fused Compare & Write: Supported 00:14:36.672 Scatter-Gather List 00:14:36.672 SGL Command Set: Supported (Dword aligned) 00:14:36.672 SGL Keyed: Not Supported 00:14:36.672 SGL Bit Bucket Descriptor: Not Supported 00:14:36.672 SGL Metadata Pointer: Not Supported 00:14:36.672 Oversized SGL: Not Supported 00:14:36.672 SGL Metadata Address: Not Supported 00:14:36.672 SGL Offset: Not Supported 00:14:36.672 Transport SGL Data Block: Not Supported 00:14:36.672 Replay Protected Memory Block: Not Supported 00:14:36.672 00:14:36.672 Firmware Slot Information 00:14:36.672 ========================= 00:14:36.672 Active slot: 1 00:14:36.672 Slot 1 Firmware Revision: 25.01 00:14:36.672 00:14:36.672 00:14:36.672 Commands Supported and Effects 00:14:36.672 ============================== 00:14:36.672 Admin Commands 00:14:36.672 -------------- 00:14:36.672 Get Log Page (02h): Supported 
00:14:36.672 Identify (06h): Supported 00:14:36.672 Abort (08h): Supported 00:14:36.672 Set Features (09h): Supported 00:14:36.672 Get Features (0Ah): Supported 00:14:36.672 Asynchronous Event Request (0Ch): Supported 00:14:36.672 Keep Alive (18h): Supported 00:14:36.672 I/O Commands 00:14:36.672 ------------ 00:14:36.672 Flush (00h): Supported LBA-Change 00:14:36.672 Write (01h): Supported LBA-Change 00:14:36.672 Read (02h): Supported 00:14:36.672 Compare (05h): Supported 00:14:36.672 Write Zeroes (08h): Supported LBA-Change 00:14:36.672 Dataset Management (09h): Supported LBA-Change 00:14:36.672 Copy (19h): Supported LBA-Change 00:14:36.672 00:14:36.672 Error Log 00:14:36.672 ========= 00:14:36.672 00:14:36.672 Arbitration 00:14:36.672 =========== 00:14:36.672 Arbitration Burst: 1 00:14:36.672 00:14:36.672 Power Management 00:14:36.672 ================ 00:14:36.672 Number of Power States: 1 00:14:36.672 Current Power State: Power State #0 00:14:36.672 Power State #0: 00:14:36.672 Max Power: 0.00 W 00:14:36.672 Non-Operational State: Operational 00:14:36.672 Entry Latency: Not Reported 00:14:36.672 Exit Latency: Not Reported 00:14:36.672 Relative Read Throughput: 0 00:14:36.672 Relative Read Latency: 0 00:14:36.672 Relative Write Throughput: 0 00:14:36.672 Relative Write Latency: 0 00:14:36.672 Idle Power: Not Reported 00:14:36.673 Active Power: Not Reported 00:14:36.673 Non-Operational Permissive Mode: Not Supported 00:14:36.673 00:14:36.673 Health Information 00:14:36.673 ================== 00:14:36.673 Critical Warnings: 00:14:36.673 Available Spare Space: OK 00:14:36.673 Temperature: OK 00:14:36.673 Device Reliability: OK 00:14:36.673 Read Only: No 00:14:36.673 Volatile Memory Backup: OK 00:14:36.673 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:36.673 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:36.673 Available Spare: 0% 00:14:36.673 Available Sp[2024-10-09 00:22:07.240317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:36.673 [2024-10-09 00:22:07.240323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:36.673 [2024-10-09 00:22:07.240342] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:36.673 [2024-10-09 00:22:07.240349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.673 [2024-10-09 00:22:07.240356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.673 [2024-10-09 00:22:07.240361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.673 [2024-10-09 00:22:07.240365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.673 [2024-10-09 00:22:07.240640] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:36.673 [2024-10-09 00:22:07.240647] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:36.673 [2024-10-09 00:22:07.241646] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:14:36.673 [2024-10-09 00:22:07.241685] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:36.673 [2024-10-09 00:22:07.241689] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:36.673 [2024-10-09 00:22:07.242657] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:36.673 [2024-10-09 00:22:07.242665] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:36.673 [2024-10-09 00:22:07.242727] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:36.673 [2024-10-09 00:22:07.243669] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:36.673 are Threshold: 0% 00:14:36.673 Life Percentage Used: 0% 00:14:36.673 Data Units Read: 0 00:14:36.673 Data Units Written: 0 00:14:36.673 Host Read Commands: 0 00:14:36.673 Host Write Commands: 0 00:14:36.673 Controller Busy Time: 0 minutes 00:14:36.673 Power Cycles: 0 00:14:36.673 Power On Hours: 0 hours 00:14:36.673 Unsafe Shutdowns: 0 00:14:36.673 Unrecoverable Media Errors: 0 00:14:36.673 Lifetime Error Log Entries: 0 00:14:36.673 Warning Temperature Time: 0 minutes 00:14:36.673 Critical Temperature Time: 0 minutes 00:14:36.673 00:14:36.673 Number of Queues 00:14:36.673 ================ 00:14:36.673 Number of I/O Submission Queues: 127 00:14:36.673 Number of I/O Completion Queues: 127 00:14:36.673 00:14:36.673 Active Namespaces 00:14:36.673 ================= 00:14:36.673 Namespace ID:1 00:14:36.673 Error Recovery Timeout: Unlimited 00:14:36.673 Command Set Identifier: NVM (00h) 00:14:36.673 Deallocate: Supported 00:14:36.673 Deallocated/Unwritten Error: Not Supported 00:14:36.673 Deallocated Read Value: Unknown 00:14:36.673 Deallocate in Write Zeroes: Not Supported 00:14:36.673 Deallocated Guard Field: 0xFFFF 00:14:36.673 Flush: Supported 00:14:36.673 Reservation: Supported 00:14:36.673 Namespace Sharing Capabilities: Multiple Controllers 00:14:36.673 Size (in LBAs): 131072 (0GiB) 00:14:36.673 Capacity (in LBAs): 131072 (0GiB) 00:14:36.673 Utilization (in LBAs): 131072 (0GiB) 00:14:36.673 NGUID: EBFFD627C5D94420BE3D86858AE252A8 00:14:36.673 UUID: ebffd627-c5d9-4420-be3d-86858ae252a8 00:14:36.673 Thin Provisioning: Not Supported 00:14:36.673 Per-NS Atomic Units: Yes 00:14:36.673 Atomic Boundary Size (Normal): 0 00:14:36.673 Atomic Boundary Size (PFail): 0 00:14:36.673 Atomic Boundary Offset: 0 00:14:36.673 Maximum Single Source Range Length: 65535 00:14:36.673 Maximum Copy Length: 65535 00:14:36.673 Maximum Source Range Count: 1 00:14:36.673 NGUID/EUI64 Never Reused: No 00:14:36.673 Namespace Write Protected: No 00:14:36.673 Number of LBA Formats: 1 00:14:36.673 Current LBA Format: LBA Format #00 00:14:36.673 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:36.673 00:14:36.673 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:36.933 [2024-10-09 00:22:07.420330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:42.219 Initializing NVMe Controllers 00:14:42.219 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:42.219 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:42.219 Initialization complete. Launching workers. 00:14:42.219 ======================================================== 00:14:42.219 Latency(us) 00:14:42.219 Device Information : IOPS MiB/s Average min max 00:14:42.219 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39987.03 156.20 3200.91 853.58 7693.85 00:14:42.219 ======================================================== 00:14:42.219 Total : 39987.03 156.20 3200.91 853.58 7693.85 00:14:42.219 00:14:42.219 [2024-10-09 00:22:12.440439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:42.219 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:42.219 [2024-10-09 00:22:12.619234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:47.521 Initializing NVMe Controllers 00:14:47.521 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:47.521 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:47.521 Initialization complete. Launching workers. 00:14:47.521 ======================================================== 00:14:47.521 Latency(us) 00:14:47.521 Device Information : IOPS MiB/s Average min max 00:14:47.521 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16037.19 62.65 7986.98 5981.40 11056.93 00:14:47.521 ======================================================== 00:14:47.521 Total : 16037.19 62.65 7986.98 5981.40 11056.93 00:14:47.521 00:14:47.521 [2024-10-09 00:22:17.658034] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:47.521 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:47.521 [2024-10-09 00:22:17.839814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.809 [2024-10-09 00:22:22.914983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.809 Initializing NVMe Controllers 00:14:52.809 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:52.809 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:52.809 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:52.809 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:52.809 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:52.809 Initialization complete. Launching workers. 
00:14:52.809 Starting thread on core 2 00:14:52.809 Starting thread on core 3 00:14:52.809 Starting thread on core 1 00:14:52.809 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:52.809 [2024-10-09 00:22:23.147870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.104 [2024-10-09 00:22:26.327839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.104 Initializing NVMe Controllers 00:14:56.104 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.104 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.104 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:56.104 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:56.104 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:56.104 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:56.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:56.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:56.104 Initialization complete. Launching workers. 00:14:56.104 Starting thread on core 1 with urgent priority queue 00:14:56.104 Starting thread on core 2 with urgent priority queue 00:14:56.104 Starting thread on core 3 with urgent priority queue 00:14:56.104 Starting thread on core 0 with urgent priority queue 00:14:56.104 SPDK bdev Controller (SPDK1 ) core 0: 6841.33 IO/s 14.62 secs/100000 ios 00:14:56.104 SPDK bdev Controller (SPDK1 ) core 1: 6137.67 IO/s 16.29 secs/100000 ios 00:14:56.104 SPDK bdev Controller (SPDK1 ) core 2: 5457.67 IO/s 18.32 secs/100000 ios 00:14:56.104 SPDK bdev Controller (SPDK1 ) core 3: 8188.33 IO/s 12.21 secs/100000 ios 00:14:56.104 ======================================================== 00:14:56.104 00:14:56.104 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:56.104 [2024-10-09 00:22:26.556151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.104 Initializing NVMe Controllers 00:14:56.104 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.104 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.104 Namespace ID: 1 size: 0GB 00:14:56.104 Initialization complete. 00:14:56.104 INFO: using host memory buffer for IO 00:14:56.104 Hello world! 
00:14:56.104 [2024-10-09 00:22:26.593364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.104 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:56.366 [2024-10-09 00:22:26.811097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.305 Initializing NVMe Controllers 00:14:57.305 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.305 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.305 Initialization complete. Launching workers. 00:14:57.305 submit (in ns) avg, min, max = 5523.4, 2817.5, 3999216.7 00:14:57.305 complete (in ns) avg, min, max = 17140.7, 1631.7, 4001020.0 00:14:57.305 00:14:57.305 Submit histogram 00:14:57.305 ================ 00:14:57.305 Range in us Cumulative Count 00:14:57.305 2.813 - 2.827: 0.0049% ( 1) 00:14:57.305 2.827 - 2.840: 0.6285% ( 126) 00:14:57.305 2.840 - 2.853: 2.2963% ( 337) 00:14:57.305 2.853 - 2.867: 5.7607% ( 700) 00:14:57.305 2.867 - 2.880: 12.2290% ( 1307) 00:14:57.305 2.880 - 2.893: 18.4995% ( 1267) 00:14:57.305 2.893 - 2.907: 24.2552% ( 1163) 00:14:57.305 2.907 - 2.920: 30.9809% ( 1359) 00:14:57.305 2.920 - 2.933: 35.7963% ( 973) 00:14:57.305 2.933 - 2.947: 40.2356% ( 897) 00:14:57.305 2.947 - 2.960: 45.4370% ( 1051) 00:14:57.305 2.960 - 2.973: 51.1284% ( 1150) 00:14:57.305 2.973 - 2.987: 57.1810% ( 1223) 00:14:57.305 2.987 - 3.000: 66.4605% ( 1875) 00:14:57.305 3.000 - 3.013: 75.5518% ( 1837) 00:14:57.305 3.013 - 3.027: 83.5049% ( 1607) 00:14:57.305 3.027 - 3.040: 90.1910% ( 1351) 00:14:57.305 3.040 - 3.053: 94.7689% ( 925) 00:14:57.305 3.053 - 3.067: 97.5205% ( 556) 00:14:57.305 3.067 - 3.080: 98.8667% ( 272) 00:14:57.305 3.080 - 3.093: 99.3913% ( 106) 00:14:57.305 3.093 - 3.107: 99.5199% ( 26) 00:14:57.306 3.107 - 3.120: 99.5942% ( 15) 00:14:57.306 3.120 - 3.133: 99.6140% ( 4) 00:14:57.306 3.133 - 3.147: 99.6189% ( 1) 00:14:57.306 3.200 - 3.213: 99.6239% ( 1) 00:14:57.306 3.227 - 3.240: 99.6288% ( 1) 00:14:57.306 3.267 - 3.280: 99.6338% ( 1) 00:14:57.306 3.653 - 3.680: 99.6387% ( 1) 00:14:57.306 3.707 - 3.733: 99.6437% ( 1) 00:14:57.306 3.867 - 3.893: 99.6486% ( 1) 00:14:57.306 4.187 - 4.213: 99.6585% ( 2) 00:14:57.306 4.213 - 4.240: 99.6635% ( 1) 00:14:57.306 4.240 - 4.267: 99.6684% ( 1) 00:14:57.306 4.347 - 4.373: 99.6734% ( 1) 00:14:57.306 4.400 - 4.427: 99.6783% ( 1) 00:14:57.306 4.453 - 4.480: 99.6833% ( 1) 00:14:57.306 4.533 - 4.560: 99.6882% ( 1) 00:14:57.306 4.587 - 4.613: 99.6932% ( 1) 00:14:57.306 4.640 - 4.667: 99.6981% ( 1) 00:14:57.306 4.667 - 4.693: 99.7031% ( 1) 00:14:57.306 4.693 - 4.720: 99.7130% ( 2) 00:14:57.306 4.720 - 4.747: 99.7179% ( 1) 00:14:57.306 4.773 - 4.800: 99.7278% ( 2) 00:14:57.306 4.907 - 4.933: 99.7328% ( 1) 00:14:57.306 4.933 - 4.960: 99.7476% ( 3) 00:14:57.306 4.960 - 4.987: 99.7575% ( 2) 00:14:57.306 4.987 - 5.013: 99.7723% ( 3) 00:14:57.306 5.013 - 5.040: 99.7773% ( 1) 00:14:57.306 5.093 - 5.120: 99.7822% ( 1) 00:14:57.306 5.147 - 5.173: 99.7872% ( 1) 00:14:57.306 5.173 - 5.200: 99.7921% ( 1) 00:14:57.306 5.440 - 5.467: 99.7971% ( 1) 00:14:57.306 5.467 - 5.493: 99.8070% ( 2) 00:14:57.306 5.627 - 5.653: 99.8119% ( 1) 00:14:57.306 5.733 - 5.760: 99.8169% ( 1) 00:14:57.306 5.813 - 5.840: 99.8218% ( 1) 00:14:57.306 5.947 - 5.973: 
99.8268% ( 1) 00:14:57.306 5.973 - 6.000: 99.8317% ( 1) 00:14:57.306 6.213 - 6.240: 99.8367% ( 1) 00:14:57.306 6.240 - 6.267: 99.8416% ( 1) 00:14:57.306 6.293 - 6.320: 99.8466% ( 1) 00:14:57.306 6.507 - 6.533: 99.8515% ( 1) 00:14:57.306 6.587 - 6.613: 99.8565% ( 1) 00:14:57.306 6.667 - 6.693: 99.8664% ( 2) 00:14:57.306 6.720 - 6.747: 99.8763% ( 2) 00:14:57.306 6.773 - 6.800: 99.8812% ( 1) 00:14:57.306 6.827 - 6.880: 99.8862% ( 1) 00:14:57.306 6.880 - 6.933: 99.8911% ( 1) 00:14:57.306 6.933 - 6.987: 99.8961% ( 1) 00:14:57.306 7.093 - 7.147: 99.9010% ( 1) 00:14:57.306 7.147 - 7.200: 99.9109% ( 2) 00:14:57.306 7.253 - 7.307: 99.9159% ( 1) 00:14:57.306 7.413 - 7.467: 99.9208% ( 1) 00:14:57.306 7.520 - 7.573: 99.9258% ( 1) 00:14:57.306 7.573 - 7.627: 99.9307% ( 1) 00:14:57.306 7.627 - 7.680: 99.9357% ( 1) 00:14:57.306 3659.093 - 3686.400: 99.9406% ( 1) 00:14:57.306 3986.773 - 4014.080: 100.0000% ( 12) 00:14:57.306 00:14:57.306 [2024-10-09 00:22:27.831798] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.306 Complete histogram 00:14:57.306 ================== 00:14:57.306 Range in us Cumulative Count 00:14:57.306 1.627 - 1.633: 0.0049% ( 1) 00:14:57.306 1.640 - 1.647: 0.2326% ( 46) 00:14:57.306 1.647 - 1.653: 1.1680% ( 189) 00:14:57.306 1.653 - 1.660: 1.2818% ( 23) 00:14:57.306 1.660 - 1.667: 1.3560% ( 15) 00:14:57.306 1.667 - 1.673: 1.4303% ( 15) 00:14:57.306 1.673 - 1.680: 1.4402% ( 2) 00:14:57.306 1.680 - 1.687: 1.5194% ( 16) 00:14:57.306 1.687 - 1.693: 10.7097% ( 1857) 00:14:57.306 1.693 - 1.700: 45.7587% ( 7082) 00:14:57.306 1.700 - 1.707: 51.6876% ( 1198) 00:14:57.306 1.707 - 1.720: 73.1070% ( 4328) 00:14:57.306 1.720 - 1.733: 82.0994% ( 1817) 00:14:57.306 1.733 - 1.747: 83.3663% ( 256) 00:14:57.306 1.747 - 1.760: 86.8950% ( 713) 00:14:57.306 1.760 - 1.773: 92.5517% ( 1143) 00:14:57.306 1.773 - 1.787: 96.8227% ( 863) 00:14:57.306 1.787 - 1.800: 98.7083% ( 381) 00:14:57.306 1.800 - 1.813: 99.2923% ( 118) 00:14:57.306 1.813 - 1.827: 99.3913% ( 20) 00:14:57.306 1.827 - 1.840: 99.4111% ( 4) 00:14:57.306 1.840 - 1.853: 99.4259% ( 3) 00:14:57.306 1.867 - 1.880: 99.4309% ( 1) 00:14:57.306 2.013 - 2.027: 99.4358% ( 1) 00:14:57.306 3.320 - 3.333: 99.4408% ( 1) 00:14:57.306 3.413 - 3.440: 99.4457% ( 1) 00:14:57.306 3.493 - 3.520: 99.4507% ( 1) 00:14:57.306 3.600 - 3.627: 99.4556% ( 1) 00:14:57.306 3.653 - 3.680: 99.4655% ( 2) 00:14:57.306 3.707 - 3.733: 99.4705% ( 1) 00:14:57.306 3.760 - 3.787: 99.4754% ( 1) 00:14:57.306 3.947 - 3.973: 99.4853% ( 2) 00:14:57.306 4.213 - 4.240: 99.4903% ( 1) 00:14:57.306 4.427 - 4.453: 99.4952% ( 1) 00:14:57.306 4.453 - 4.480: 99.5001% ( 1) 00:14:57.306 4.587 - 4.613: 99.5051% ( 1) 00:14:57.306 4.613 - 4.640: 99.5100% ( 1) 00:14:57.306 4.693 - 4.720: 99.5150% ( 1) 00:14:57.306 4.800 - 4.827: 99.5249% ( 2) 00:14:57.306 4.880 - 4.907: 99.5298% ( 1) 00:14:57.306 4.907 - 4.933: 99.5348% ( 1) 00:14:57.306 4.960 - 4.987: 99.5447% ( 2) 00:14:57.306 5.013 - 5.040: 99.5496% ( 1) 00:14:57.306 5.227 - 5.253: 99.5546% ( 1) 00:14:57.306 5.307 - 5.333: 99.5595% ( 1) 00:14:57.306 5.387 - 5.413: 99.5645% ( 1) 00:14:57.306 5.413 - 5.440: 99.5694% ( 1) 00:14:57.306 5.493 - 5.520: 99.5744% ( 1) 00:14:57.306 5.680 - 5.707: 99.5793% ( 1) 00:14:57.306 5.760 - 5.787: 99.5843% ( 1) 00:14:57.306 6.053 - 6.080: 99.5892% ( 1) 00:14:57.306 6.320 - 6.347: 99.5942% ( 1) 00:14:57.306 6.427 - 6.453: 99.5991% ( 1) 00:14:57.306 6.453 - 6.480: 99.6041% ( 1) 00:14:57.306 10.293 - 10.347: 99.6090% ( 1) 00:14:57.306 10.507 - 10.560: 99.6140% ( 
1) 00:14:57.306 3986.773 - 4014.080: 100.0000% ( 78) 00:14:57.306 00:14:57.306 00:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:57.306 00:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:57.306 00:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:57.306 00:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:57.306 00:22:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:57.567 [ 00:14:57.567 { 00:14:57.567 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:57.567 "subtype": "Discovery", 00:14:57.567 "listen_addresses": [], 00:14:57.567 "allow_any_host": true, 00:14:57.567 "hosts": [] 00:14:57.567 }, 00:14:57.567 { 00:14:57.567 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:57.567 "subtype": "NVMe", 00:14:57.567 "listen_addresses": [ 00:14:57.567 { 00:14:57.567 "trtype": "VFIOUSER", 00:14:57.567 "adrfam": "IPv4", 00:14:57.567 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:57.567 "trsvcid": "0" 00:14:57.567 } 00:14:57.567 ], 00:14:57.567 "allow_any_host": true, 00:14:57.567 "hosts": [], 00:14:57.567 "serial_number": "SPDK1", 00:14:57.567 "model_number": "SPDK bdev Controller", 00:14:57.567 "max_namespaces": 32, 00:14:57.567 "min_cntlid": 1, 00:14:57.567 "max_cntlid": 65519, 00:14:57.567 "namespaces": [ 00:14:57.567 { 00:14:57.567 "nsid": 1, 00:14:57.567 "bdev_name": "Malloc1", 00:14:57.567 "name": "Malloc1", 00:14:57.567 "nguid": "EBFFD627C5D94420BE3D86858AE252A8", 00:14:57.567 "uuid": "ebffd627-c5d9-4420-be3d-86858ae252a8" 00:14:57.567 } 00:14:57.567 ] 00:14:57.567 }, 00:14:57.567 { 00:14:57.567 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:57.567 "subtype": "NVMe", 00:14:57.567 "listen_addresses": [ 00:14:57.567 { 00:14:57.567 "trtype": "VFIOUSER", 00:14:57.567 "adrfam": "IPv4", 00:14:57.567 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:57.567 "trsvcid": "0" 00:14:57.567 } 00:14:57.567 ], 00:14:57.567 "allow_any_host": true, 00:14:57.567 "hosts": [], 00:14:57.567 "serial_number": "SPDK2", 00:14:57.567 "model_number": "SPDK bdev Controller", 00:14:57.567 "max_namespaces": 32, 00:14:57.567 "min_cntlid": 1, 00:14:57.567 "max_cntlid": 65519, 00:14:57.567 "namespaces": [ 00:14:57.567 { 00:14:57.567 "nsid": 1, 00:14:57.567 "bdev_name": "Malloc2", 00:14:57.567 "name": "Malloc2", 00:14:57.567 "nguid": "9B9E7A37A429470DA3947C085B074A39", 00:14:57.567 "uuid": "9b9e7a37-a429-470d-a394-7c085b074a39" 00:14:57.567 } 00:14:57.567 ] 00:14:57.567 } 00:14:57.567 ] 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3208003 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile 
/tmp/aer_touch_file 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:57.567 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:57.567 [2024-10-09 00:22:28.197176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.829 Malloc3 00:14:57.829 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:57.829 [2024-10-09 00:22:28.415698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.829 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:57.829 Asynchronous Event Request test 00:14:57.829 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.829 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.829 Registering asynchronous event callbacks... 00:14:57.829 Starting namespace attribute notice tests for all controllers... 00:14:57.829 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:57.829 aer_cb - Changed Namespace 00:14:57.829 Cleaning up... 
00:14:58.090 [ 00:14:58.090 { 00:14:58.090 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:58.090 "subtype": "Discovery", 00:14:58.090 "listen_addresses": [], 00:14:58.090 "allow_any_host": true, 00:14:58.090 "hosts": [] 00:14:58.090 }, 00:14:58.090 { 00:14:58.090 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:58.090 "subtype": "NVMe", 00:14:58.090 "listen_addresses": [ 00:14:58.090 { 00:14:58.090 "trtype": "VFIOUSER", 00:14:58.090 "adrfam": "IPv4", 00:14:58.090 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:58.090 "trsvcid": "0" 00:14:58.090 } 00:14:58.090 ], 00:14:58.090 "allow_any_host": true, 00:14:58.090 "hosts": [], 00:14:58.090 "serial_number": "SPDK1", 00:14:58.090 "model_number": "SPDK bdev Controller", 00:14:58.090 "max_namespaces": 32, 00:14:58.090 "min_cntlid": 1, 00:14:58.090 "max_cntlid": 65519, 00:14:58.090 "namespaces": [ 00:14:58.090 { 00:14:58.090 "nsid": 1, 00:14:58.090 "bdev_name": "Malloc1", 00:14:58.090 "name": "Malloc1", 00:14:58.090 "nguid": "EBFFD627C5D94420BE3D86858AE252A8", 00:14:58.090 "uuid": "ebffd627-c5d9-4420-be3d-86858ae252a8" 00:14:58.090 }, 00:14:58.090 { 00:14:58.090 "nsid": 2, 00:14:58.090 "bdev_name": "Malloc3", 00:14:58.090 "name": "Malloc3", 00:14:58.090 "nguid": "A1FF26AEAF3342F4B7EE5811FD1A700E", 00:14:58.090 "uuid": "a1ff26ae-af33-42f4-b7ee-5811fd1a700e" 00:14:58.090 } 00:14:58.090 ] 00:14:58.090 }, 00:14:58.090 { 00:14:58.090 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:58.090 "subtype": "NVMe", 00:14:58.090 "listen_addresses": [ 00:14:58.090 { 00:14:58.090 "trtype": "VFIOUSER", 00:14:58.090 "adrfam": "IPv4", 00:14:58.090 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:58.090 "trsvcid": "0" 00:14:58.090 } 00:14:58.090 ], 00:14:58.090 "allow_any_host": true, 00:14:58.090 "hosts": [], 00:14:58.091 "serial_number": "SPDK2", 00:14:58.091 "model_number": "SPDK bdev Controller", 00:14:58.091 "max_namespaces": 32, 00:14:58.091 "min_cntlid": 1, 00:14:58.091 "max_cntlid": 65519, 00:14:58.091 "namespaces": [ 00:14:58.091 { 00:14:58.091 "nsid": 1, 00:14:58.091 "bdev_name": "Malloc2", 00:14:58.091 "name": "Malloc2", 00:14:58.091 "nguid": "9B9E7A37A429470DA3947C085B074A39", 00:14:58.091 "uuid": "9b9e7a37-a429-470d-a394-7c085b074a39" 00:14:58.091 } 00:14:58.091 ] 00:14:58.091 } 00:14:58.091 ] 00:14:58.091 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3208003 00:14:58.091 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.091 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:58.091 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:58.091 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:58.091 [2024-10-09 00:22:28.643894] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:14:58.091 [2024-10-09 00:22:28.643940] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208094 ] 00:14:58.091 [2024-10-09 00:22:28.671767] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:58.091 [2024-10-09 00:22:28.675057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:58.091 [2024-10-09 00:22:28.675077] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fba0c985000 00:14:58.091 [2024-10-09 00:22:28.676054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.091 [2024-10-09 00:22:28.677057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.091 [2024-10-09 00:22:28.678064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.091 [2024-10-09 00:22:28.679074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:58.091 [2024-10-09 00:22:28.680080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:58.091 [2024-10-09 00:22:28.681085] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.091 [2024-10-09 00:22:28.682093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:58.091 [2024-10-09 00:22:28.683100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.091 [2024-10-09 00:22:28.684108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:58.091 [2024-10-09 00:22:28.684116] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fba0c97a000 00:14:58.091 [2024-10-09 00:22:28.685028] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:58.091 [2024-10-09 00:22:28.696414] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:58.091 [2024-10-09 00:22:28.696433] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:58.091 [2024-10-09 00:22:28.701508] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:58.091 [2024-10-09 00:22:28.701544] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:58.091 [2024-10-09 00:22:28.701604] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:58.091 [2024-10-09 
00:22:28.701617] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:58.091 [2024-10-09 00:22:28.701623] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:58.091 [2024-10-09 00:22:28.702508] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:58.091 [2024-10-09 00:22:28.702515] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:58.091 [2024-10-09 00:22:28.702520] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:58.091 [2024-10-09 00:22:28.703511] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:58.091 [2024-10-09 00:22:28.703518] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:58.091 [2024-10-09 00:22:28.703524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:58.091 [2024-10-09 00:22:28.704520] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:58.091 [2024-10-09 00:22:28.704526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:58.091 [2024-10-09 00:22:28.705529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:58.091 [2024-10-09 00:22:28.705536] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:58.091 [2024-10-09 00:22:28.705540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:58.091 [2024-10-09 00:22:28.705545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:58.091 [2024-10-09 00:22:28.705649] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:58.091 [2024-10-09 00:22:28.705652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:58.091 [2024-10-09 00:22:28.705656] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:58.091 [2024-10-09 00:22:28.706534] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:58.091 [2024-10-09 00:22:28.707542] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:58.091 [2024-10-09 00:22:28.708550] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:14:58.091 [2024-10-09 00:22:28.709557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:58.091 [2024-10-09 00:22:28.709590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:58.091 [2024-10-09 00:22:28.710570] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:58.091 [2024-10-09 00:22:28.710577] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:58.091 [2024-10-09 00:22:28.710581] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:58.091 [2024-10-09 00:22:28.710598] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:58.091 [2024-10-09 00:22:28.710603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:58.091 [2024-10-09 00:22:28.710613] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:58.091 [2024-10-09 00:22:28.710617] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.091 [2024-10-09 00:22:28.710620] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.091 [2024-10-09 00:22:28.710629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.091 [2024-10-09 00:22:28.717728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:58.091 [2024-10-09 00:22:28.717737] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:58.091 [2024-10-09 00:22:28.717741] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:58.091 [2024-10-09 00:22:28.717744] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:58.091 [2024-10-09 00:22:28.717748] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:58.091 [2024-10-09 00:22:28.717751] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:58.091 [2024-10-09 00:22:28.717754] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:58.091 [2024-10-09 00:22:28.717758] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:58.091 [2024-10-09 00:22:28.717763] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:58.091 [2024-10-09 00:22:28.717771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:58.354 [2024-10-09 00:22:28.725727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:58.354 [2024-10-09 00:22:28.725738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.354 [2024-10-09 00:22:28.725744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.354 [2024-10-09 00:22:28.725751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.354 [2024-10-09 00:22:28.725757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.354 [2024-10-09 00:22:28.725760] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:58.354 [2024-10-09 00:22:28.725767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:58.354 [2024-10-09 00:22:28.725775] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:58.354 [2024-10-09 00:22:28.733727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:58.354 [2024-10-09 00:22:28.733734] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:58.354 [2024-10-09 00:22:28.733740] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:58.354 [2024-10-09 00:22:28.733745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:58.354 [2024-10-09 00:22:28.733751] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:58.354 [2024-10-09 00:22:28.733758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:58.354 [2024-10-09 00:22:28.741727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:58.354 [2024-10-09 00:22:28.741774] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:58.354 [2024-10-09 00:22:28.741780] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:58.354 [2024-10-09 00:22:28.741786] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:58.354 [2024-10-09 00:22:28.741789] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:58.354 [2024-10-09 00:22:28.741792] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:14:58.354 [2024-10-09 00:22:28.741796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:58.354 [2024-10-09 00:22:28.749727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:58.354 [2024-10-09 00:22:28.749736] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:58.354 [2024-10-09 00:22:28.749747] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.749753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.749758] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:58.355 [2024-10-09 00:22:28.749761] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.355 [2024-10-09 00:22:28.749763] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.355 [2024-10-09 00:22:28.749768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.355 [2024-10-09 00:22:28.757726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:58.355 [2024-10-09 00:22:28.757738] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.757744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.757749] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:58.355 [2024-10-09 00:22:28.757752] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.355 [2024-10-09 00:22:28.757755] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.355 [2024-10-09 00:22:28.757759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.355 [2024-10-09 00:22:28.765726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:58.355 [2024-10-09 00:22:28.765734] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.765739] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.765745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.765750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.765753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.765757] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.765761] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:58.355 [2024-10-09 00:22:28.765764] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:58.355 [2024-10-09 00:22:28.765767] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:58.355 [2024-10-09 00:22:28.765780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:58.355 [2024-10-09 00:22:28.773725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:58.355 [2024-10-09 00:22:28.773736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:58.355 [2024-10-09 00:22:28.781726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:58.355 [2024-10-09 00:22:28.781736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:58.355 [2024-10-09 00:22:28.789724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:58.355 [2024-10-09 00:22:28.789735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:58.355 [2024-10-09 00:22:28.797728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:58.355 [2024-10-09 00:22:28.797743] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:58.355 [2024-10-09 00:22:28.797746] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:58.355 [2024-10-09 00:22:28.797749] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:58.355 [2024-10-09 00:22:28.797752] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:58.355 [2024-10-09 00:22:28.797754] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:58.355 [2024-10-09 00:22:28.797759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:58.355 [2024-10-09 00:22:28.797764] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:58.355 [2024-10-09 00:22:28.797767] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:58.355 [2024-10-09 00:22:28.797770] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.355 [2024-10-09 00:22:28.797776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:58.355 [2024-10-09 00:22:28.797781] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:58.355 [2024-10-09 00:22:28.797784] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.355 [2024-10-09 00:22:28.797787] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.355 [2024-10-09 00:22:28.797791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.355 [2024-10-09 00:22:28.797797] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:58.355 [2024-10-09 00:22:28.797800] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:58.355 [2024-10-09 00:22:28.797802] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.355 [2024-10-09 00:22:28.797807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:58.355 [2024-10-09 00:22:28.805726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:58.355 [2024-10-09 00:22:28.805738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:58.355 [2024-10-09 00:22:28.805746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:58.355 [2024-10-09 00:22:28.805751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:58.355 ===================================================== 00:14:58.355 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:58.355 ===================================================== 00:14:58.355 Controller Capabilities/Features 00:14:58.355 ================================ 00:14:58.355 Vendor ID: 4e58 00:14:58.355 Subsystem Vendor ID: 4e58 00:14:58.355 Serial Number: SPDK2 00:14:58.355 Model Number: SPDK bdev Controller 00:14:58.355 Firmware Version: 25.01 00:14:58.355 Recommended Arb Burst: 6 00:14:58.355 IEEE OUI Identifier: 8d 6b 50 00:14:58.355 Multi-path I/O 00:14:58.355 May have multiple subsystem ports: Yes 00:14:58.355 May have multiple controllers: Yes 00:14:58.355 Associated with SR-IOV VF: No 00:14:58.355 Max Data Transfer Size: 131072 00:14:58.355 Max Number of Namespaces: 32 00:14:58.355 Max Number of I/O Queues: 127 00:14:58.355 NVMe Specification Version (VS): 1.3 00:14:58.355 NVMe Specification Version (Identify): 1.3 00:14:58.355 Maximum Queue Entries: 256 00:14:58.355 Contiguous Queues Required: Yes 00:14:58.355 Arbitration Mechanisms Supported 00:14:58.355 Weighted Round Robin: Not Supported 00:14:58.355 Vendor Specific: Not Supported 00:14:58.355 Reset Timeout: 15000 ms 00:14:58.355 Doorbell Stride: 4 bytes 00:14:58.355 NVM Subsystem Reset: Not Supported 00:14:58.355 Command 
Sets Supported 00:14:58.355 NVM Command Set: Supported 00:14:58.355 Boot Partition: Not Supported 00:14:58.355 Memory Page Size Minimum: 4096 bytes 00:14:58.355 Memory Page Size Maximum: 4096 bytes 00:14:58.355 Persistent Memory Region: Not Supported 00:14:58.355 Optional Asynchronous Events Supported 00:14:58.355 Namespace Attribute Notices: Supported 00:14:58.355 Firmware Activation Notices: Not Supported 00:14:58.355 ANA Change Notices: Not Supported 00:14:58.355 PLE Aggregate Log Change Notices: Not Supported 00:14:58.355 LBA Status Info Alert Notices: Not Supported 00:14:58.355 EGE Aggregate Log Change Notices: Not Supported 00:14:58.355 Normal NVM Subsystem Shutdown event: Not Supported 00:14:58.355 Zone Descriptor Change Notices: Not Supported 00:14:58.355 Discovery Log Change Notices: Not Supported 00:14:58.355 Controller Attributes 00:14:58.355 128-bit Host Identifier: Supported 00:14:58.355 Non-Operational Permissive Mode: Not Supported 00:14:58.355 NVM Sets: Not Supported 00:14:58.355 Read Recovery Levels: Not Supported 00:14:58.355 Endurance Groups: Not Supported 00:14:58.355 Predictable Latency Mode: Not Supported 00:14:58.355 Traffic Based Keep ALive: Not Supported 00:14:58.355 Namespace Granularity: Not Supported 00:14:58.355 SQ Associations: Not Supported 00:14:58.355 UUID List: Not Supported 00:14:58.355 Multi-Domain Subsystem: Not Supported 00:14:58.355 Fixed Capacity Management: Not Supported 00:14:58.355 Variable Capacity Management: Not Supported 00:14:58.355 Delete Endurance Group: Not Supported 00:14:58.355 Delete NVM Set: Not Supported 00:14:58.355 Extended LBA Formats Supported: Not Supported 00:14:58.355 Flexible Data Placement Supported: Not Supported 00:14:58.355 00:14:58.355 Controller Memory Buffer Support 00:14:58.355 ================================ 00:14:58.355 Supported: No 00:14:58.355 00:14:58.355 Persistent Memory Region Support 00:14:58.355 ================================ 00:14:58.355 Supported: No 00:14:58.355 00:14:58.355 Admin Command Set Attributes 00:14:58.355 ============================ 00:14:58.355 Security Send/Receive: Not Supported 00:14:58.355 Format NVM: Not Supported 00:14:58.355 Firmware Activate/Download: Not Supported 00:14:58.355 Namespace Management: Not Supported 00:14:58.355 Device Self-Test: Not Supported 00:14:58.355 Directives: Not Supported 00:14:58.355 NVMe-MI: Not Supported 00:14:58.355 Virtualization Management: Not Supported 00:14:58.356 Doorbell Buffer Config: Not Supported 00:14:58.356 Get LBA Status Capability: Not Supported 00:14:58.356 Command & Feature Lockdown Capability: Not Supported 00:14:58.356 Abort Command Limit: 4 00:14:58.356 Async Event Request Limit: 4 00:14:58.356 Number of Firmware Slots: N/A 00:14:58.356 Firmware Slot 1 Read-Only: N/A 00:14:58.356 Firmware Activation Without Reset: N/A 00:14:58.356 Multiple Update Detection Support: N/A 00:14:58.356 Firmware Update Granularity: No Information Provided 00:14:58.356 Per-Namespace SMART Log: No 00:14:58.356 Asymmetric Namespace Access Log Page: Not Supported 00:14:58.356 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:58.356 Command Effects Log Page: Supported 00:14:58.356 Get Log Page Extended Data: Supported 00:14:58.356 Telemetry Log Pages: Not Supported 00:14:58.356 Persistent Event Log Pages: Not Supported 00:14:58.356 Supported Log Pages Log Page: May Support 00:14:58.356 Commands Supported & Effects Log Page: Not Supported 00:14:58.356 Feature Identifiers & Effects Log Page:May Support 00:14:58.356 NVMe-MI Commands & Effects Log Page: May Support 
00:14:58.356 Data Area 4 for Telemetry Log: Not Supported 00:14:58.356 Error Log Page Entries Supported: 128 00:14:58.356 Keep Alive: Supported 00:14:58.356 Keep Alive Granularity: 10000 ms 00:14:58.356 00:14:58.356 NVM Command Set Attributes 00:14:58.356 ========================== 00:14:58.356 Submission Queue Entry Size 00:14:58.356 Max: 64 00:14:58.356 Min: 64 00:14:58.356 Completion Queue Entry Size 00:14:58.356 Max: 16 00:14:58.356 Min: 16 00:14:58.356 Number of Namespaces: 32 00:14:58.356 Compare Command: Supported 00:14:58.356 Write Uncorrectable Command: Not Supported 00:14:58.356 Dataset Management Command: Supported 00:14:58.356 Write Zeroes Command: Supported 00:14:58.356 Set Features Save Field: Not Supported 00:14:58.356 Reservations: Not Supported 00:14:58.356 Timestamp: Not Supported 00:14:58.356 Copy: Supported 00:14:58.356 Volatile Write Cache: Present 00:14:58.356 Atomic Write Unit (Normal): 1 00:14:58.356 Atomic Write Unit (PFail): 1 00:14:58.356 Atomic Compare & Write Unit: 1 00:14:58.356 Fused Compare & Write: Supported 00:14:58.356 Scatter-Gather List 00:14:58.356 SGL Command Set: Supported (Dword aligned) 00:14:58.356 SGL Keyed: Not Supported 00:14:58.356 SGL Bit Bucket Descriptor: Not Supported 00:14:58.356 SGL Metadata Pointer: Not Supported 00:14:58.356 Oversized SGL: Not Supported 00:14:58.356 SGL Metadata Address: Not Supported 00:14:58.356 SGL Offset: Not Supported 00:14:58.356 Transport SGL Data Block: Not Supported 00:14:58.356 Replay Protected Memory Block: Not Supported 00:14:58.356 00:14:58.356 Firmware Slot Information 00:14:58.356 ========================= 00:14:58.356 Active slot: 1 00:14:58.356 Slot 1 Firmware Revision: 25.01 00:14:58.356 00:14:58.356 00:14:58.356 Commands Supported and Effects 00:14:58.356 ============================== 00:14:58.356 Admin Commands 00:14:58.356 -------------- 00:14:58.356 Get Log Page (02h): Supported 00:14:58.356 Identify (06h): Supported 00:14:58.356 Abort (08h): Supported 00:14:58.356 Set Features (09h): Supported 00:14:58.356 Get Features (0Ah): Supported 00:14:58.356 Asynchronous Event Request (0Ch): Supported 00:14:58.356 Keep Alive (18h): Supported 00:14:58.356 I/O Commands 00:14:58.356 ------------ 00:14:58.356 Flush (00h): Supported LBA-Change 00:14:58.356 Write (01h): Supported LBA-Change 00:14:58.356 Read (02h): Supported 00:14:58.356 Compare (05h): Supported 00:14:58.356 Write Zeroes (08h): Supported LBA-Change 00:14:58.356 Dataset Management (09h): Supported LBA-Change 00:14:58.356 Copy (19h): Supported LBA-Change 00:14:58.356 00:14:58.356 Error Log 00:14:58.356 ========= 00:14:58.356 00:14:58.356 Arbitration 00:14:58.356 =========== 00:14:58.356 Arbitration Burst: 1 00:14:58.356 00:14:58.356 Power Management 00:14:58.356 ================ 00:14:58.356 Number of Power States: 1 00:14:58.356 Current Power State: Power State #0 00:14:58.356 Power State #0: 00:14:58.356 Max Power: 0.00 W 00:14:58.356 Non-Operational State: Operational 00:14:58.356 Entry Latency: Not Reported 00:14:58.356 Exit Latency: Not Reported 00:14:58.356 Relative Read Throughput: 0 00:14:58.356 Relative Read Latency: 0 00:14:58.356 Relative Write Throughput: 0 00:14:58.356 Relative Write Latency: 0 00:14:58.356 Idle Power: Not Reported 00:14:58.356 Active Power: Not Reported 00:14:58.356 Non-Operational Permissive Mode: Not Supported 00:14:58.356 00:14:58.356 Health Information 00:14:58.356 ================== 00:14:58.356 Critical Warnings: 00:14:58.356 Available Spare Space: OK 00:14:58.356 Temperature: OK 00:14:58.356 Device 
Reliability: OK 00:14:58.356 Read Only: No 00:14:58.356 Volatile Memory Backup: OK 00:14:58.356 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:58.356 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:58.356 Available Spare: 0% 00:14:58.356 Available Sp[2024-10-09 00:22:28.805818] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:58.356 [2024-10-09 00:22:28.813724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:58.356 [2024-10-09 00:22:28.813745] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:58.356 [2024-10-09 00:22:28.813752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.356 [2024-10-09 00:22:28.813757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.356 [2024-10-09 00:22:28.813761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.356 [2024-10-09 00:22:28.813766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.356 [2024-10-09 00:22:28.813807] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:58.356 [2024-10-09 00:22:28.813816] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:58.356 [2024-10-09 00:22:28.814810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:58.356 [2024-10-09 00:22:28.814847] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:58.356 [2024-10-09 00:22:28.814852] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:58.356 [2024-10-09 00:22:28.815811] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:58.356 [2024-10-09 00:22:28.815819] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:58.356 [2024-10-09 00:22:28.815868] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:58.356 [2024-10-09 00:22:28.816834] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:58.356 are Threshold: 0% 00:14:58.356 Life Percentage Used: 0% 00:14:58.356 Data Units Read: 0 00:14:58.356 Data Units Written: 0 00:14:58.356 Host Read Commands: 0 00:14:58.356 Host Write Commands: 0 00:14:58.356 Controller Busy Time: 0 minutes 00:14:58.356 Power Cycles: 0 00:14:58.356 Power On Hours: 0 hours 00:14:58.356 Unsafe Shutdowns: 0 00:14:58.356 Unrecoverable Media Errors: 0 00:14:58.356 Lifetime Error Log Entries: 0 00:14:58.356 Warning Temperature Time: 0 minutes 00:14:58.356 Critical Temperature Time: 0 minutes 00:14:58.356 00:14:58.356 Number of Queues 00:14:58.356 ================ 00:14:58.356 Number of 
I/O Submission Queues: 127 00:14:58.356 Number of I/O Completion Queues: 127 00:14:58.356 00:14:58.356 Active Namespaces 00:14:58.356 ================= 00:14:58.356 Namespace ID:1 00:14:58.356 Error Recovery Timeout: Unlimited 00:14:58.356 Command Set Identifier: NVM (00h) 00:14:58.356 Deallocate: Supported 00:14:58.356 Deallocated/Unwritten Error: Not Supported 00:14:58.356 Deallocated Read Value: Unknown 00:14:58.356 Deallocate in Write Zeroes: Not Supported 00:14:58.356 Deallocated Guard Field: 0xFFFF 00:14:58.356 Flush: Supported 00:14:58.356 Reservation: Supported 00:14:58.356 Namespace Sharing Capabilities: Multiple Controllers 00:14:58.356 Size (in LBAs): 131072 (0GiB) 00:14:58.356 Capacity (in LBAs): 131072 (0GiB) 00:14:58.356 Utilization (in LBAs): 131072 (0GiB) 00:14:58.356 NGUID: 9B9E7A37A429470DA3947C085B074A39 00:14:58.356 UUID: 9b9e7a37-a429-470d-a394-7c085b074a39 00:14:58.356 Thin Provisioning: Not Supported 00:14:58.356 Per-NS Atomic Units: Yes 00:14:58.356 Atomic Boundary Size (Normal): 0 00:14:58.356 Atomic Boundary Size (PFail): 0 00:14:58.356 Atomic Boundary Offset: 0 00:14:58.356 Maximum Single Source Range Length: 65535 00:14:58.356 Maximum Copy Length: 65535 00:14:58.356 Maximum Source Range Count: 1 00:14:58.356 NGUID/EUI64 Never Reused: No 00:14:58.356 Namespace Write Protected: No 00:14:58.356 Number of LBA Formats: 1 00:14:58.356 Current LBA Format: LBA Format #00 00:14:58.356 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:58.356 00:14:58.356 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:58.618 [2024-10-09 00:22:28.994742] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.912 Initializing NVMe Controllers 00:15:03.912 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:03.912 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:03.912 Initialization complete. Launching workers. 
00:15:03.912 ======================================================== 00:15:03.912 Latency(us) 00:15:03.912 Device Information : IOPS MiB/s Average min max 00:15:03.912 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39991.80 156.22 3203.04 844.41 6797.32 00:15:03.912 ======================================================== 00:15:03.912 Total : 39991.80 156.22 3203.04 844.41 6797.32 00:15:03.912 00:15:03.912 [2024-10-09 00:22:34.102913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.912 00:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:03.912 [2024-10-09 00:22:34.282476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:09.207 Initializing NVMe Controllers 00:15:09.207 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:09.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:09.207 Initialization complete. Launching workers. 00:15:09.207 ======================================================== 00:15:09.207 Latency(us) 00:15:09.207 Device Information : IOPS MiB/s Average min max 00:15:09.207 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40035.80 156.39 3197.02 848.17 7746.07 00:15:09.207 ======================================================== 00:15:09.207 Total : 40035.80 156.39 3197.02 848.17 7746.07 00:15:09.207 00:15:09.207 [2024-10-09 00:22:39.302756] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:09.207 00:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:09.207 [2024-10-09 00:22:39.492941] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.481 [2024-10-09 00:22:44.626804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.481 Initializing NVMe Controllers 00:15:14.481 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.481 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:14.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:14.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:14.481 Initialization complete. Launching workers. 
00:15:14.481 Starting thread on core 2 00:15:14.481 Starting thread on core 3 00:15:14.481 Starting thread on core 1 00:15:14.481 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:14.481 [2024-10-09 00:22:44.863127] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.773 [2024-10-09 00:22:47.912786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.773 Initializing NVMe Controllers 00:15:17.773 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.773 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.773 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:17.773 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:17.773 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:17.773 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:17.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:17.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:17.773 Initialization complete. Launching workers. 00:15:17.773 Starting thread on core 1 with urgent priority queue 00:15:17.773 Starting thread on core 2 with urgent priority queue 00:15:17.773 Starting thread on core 3 with urgent priority queue 00:15:17.773 Starting thread on core 0 with urgent priority queue 00:15:17.773 SPDK bdev Controller (SPDK2 ) core 0: 15477.33 IO/s 6.46 secs/100000 ios 00:15:17.773 SPDK bdev Controller (SPDK2 ) core 1: 8717.67 IO/s 11.47 secs/100000 ios 00:15:17.773 SPDK bdev Controller (SPDK2 ) core 2: 9610.67 IO/s 10.41 secs/100000 ios 00:15:17.773 SPDK bdev Controller (SPDK2 ) core 3: 14463.33 IO/s 6.91 secs/100000 ios 00:15:17.773 ======================================================== 00:15:17.773 00:15:17.773 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:17.773 [2024-10-09 00:22:48.140101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.773 Initializing NVMe Controllers 00:15:17.773 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.773 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.773 Namespace ID: 1 size: 0GB 00:15:17.773 Initialization complete. 00:15:17.773 INFO: using host memory buffer for IO 00:15:17.773 Hello world! 
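For reference, the perf and example-app invocations above reduce to the following sketch; the workspace path, transport ID string, and subsystem NQN are taken from this run and are assumptions for any other environment:

# Sketch only: replaying the spdk_nvme_perf and hello_world invocations above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4 KiB reads, queue depth 128, 5 seconds, core mask 0x2 (as in the @84 step above)
$SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# Same run with writes (as in the @85 step above)
$SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
# Minimal single-I/O sanity check against the same vfio-user controller (as in the @88 step above)
$SPDK_DIR/build/examples/hello_world -d 256 -g -r "$TRID"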
00:15:17.773 [2024-10-09 00:22:48.152170] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.773 00:22:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:17.774 [2024-10-09 00:22:48.373939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.169 Initializing NVMe Controllers 00:15:19.169 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.169 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.169 Initialization complete. Launching workers. 00:15:19.169 submit (in ns) avg, min, max = 5144.8, 2833.3, 3999398.3 00:15:19.169 complete (in ns) avg, min, max = 17232.1, 1622.5, 3998758.3 00:15:19.169 00:15:19.169 Submit histogram 00:15:19.169 ================ 00:15:19.169 Range in us Cumulative Count 00:15:19.169 2.827 - 2.840: 0.1821% ( 37) 00:15:19.169 2.840 - 2.853: 1.4374% ( 255) 00:15:19.169 2.853 - 2.867: 3.9628% ( 513) 00:15:19.169 2.867 - 2.880: 8.4572% ( 913) 00:15:19.169 2.880 - 2.893: 13.3947% ( 1003) 00:15:19.169 2.893 - 2.907: 18.6915% ( 1076) 00:15:19.169 2.907 - 2.920: 24.0868% ( 1096) 00:15:19.169 2.920 - 2.933: 30.1418% ( 1230) 00:15:19.169 2.933 - 2.947: 35.5715% ( 1103) 00:15:19.169 2.947 - 2.960: 40.4745% ( 996) 00:15:19.169 2.960 - 2.973: 45.4809% ( 1017) 00:15:19.169 2.973 - 2.987: 51.0633% ( 1134) 00:15:19.169 2.987 - 3.000: 58.9347% ( 1599) 00:15:19.169 3.000 - 3.013: 69.3167% ( 2109) 00:15:19.169 3.013 - 3.027: 78.4287% ( 1851) 00:15:19.169 3.027 - 3.040: 85.5764% ( 1452) 00:15:19.169 3.040 - 3.053: 91.3114% ( 1165) 00:15:19.169 3.053 - 3.067: 94.9247% ( 734) 00:15:19.169 3.067 - 3.080: 97.4500% ( 513) 00:15:19.169 3.080 - 3.093: 98.7792% ( 270) 00:15:19.169 3.093 - 3.107: 99.3600% ( 118) 00:15:19.169 3.107 - 3.120: 99.5274% ( 34) 00:15:19.169 3.120 - 3.133: 99.5570% ( 6) 00:15:19.169 3.133 - 3.147: 99.5668% ( 2) 00:15:19.169 3.187 - 3.200: 99.5766% ( 2) 00:15:19.169 3.200 - 3.213: 99.5816% ( 1) 00:15:19.169 3.520 - 3.547: 99.5865% ( 1) 00:15:19.169 3.547 - 3.573: 99.5914% ( 1) 00:15:19.169 3.627 - 3.653: 99.5963% ( 1) 00:15:19.169 3.733 - 3.760: 99.6013% ( 1) 00:15:19.169 3.893 - 3.920: 99.6062% ( 1) 00:15:19.169 3.920 - 3.947: 99.6111% ( 1) 00:15:19.169 4.053 - 4.080: 99.6210% ( 2) 00:15:19.169 4.133 - 4.160: 99.6357% ( 3) 00:15:19.169 4.160 - 4.187: 99.6406% ( 1) 00:15:19.169 4.293 - 4.320: 99.6456% ( 1) 00:15:19.169 4.453 - 4.480: 99.6505% ( 1) 00:15:19.169 4.507 - 4.533: 99.6554% ( 1) 00:15:19.169 4.533 - 4.560: 99.6603% ( 1) 00:15:19.169 4.587 - 4.613: 99.6702% ( 2) 00:15:19.169 4.613 - 4.640: 99.6800% ( 2) 00:15:19.169 4.720 - 4.747: 99.6849% ( 1) 00:15:19.169 4.747 - 4.773: 99.6899% ( 1) 00:15:19.169 4.773 - 4.800: 99.6948% ( 1) 00:15:19.169 4.800 - 4.827: 99.6997% ( 1) 00:15:19.169 4.853 - 4.880: 99.7145% ( 3) 00:15:19.169 4.907 - 4.933: 99.7194% ( 1) 00:15:19.169 4.933 - 4.960: 99.7243% ( 1) 00:15:19.169 4.960 - 4.987: 99.7293% ( 1) 00:15:19.169 5.067 - 5.093: 99.7391% ( 2) 00:15:19.169 5.147 - 5.173: 99.7489% ( 2) 00:15:19.169 5.173 - 5.200: 99.7588% ( 2) 00:15:19.169 5.253 - 5.280: 99.7637% ( 1) 00:15:19.169 5.413 - 5.440: 99.7686% ( 1) 00:15:19.169 5.440 - 5.467: 99.7736% ( 1) 00:15:19.169 5.467 - 5.493: 99.7785% ( 1) 00:15:19.169 5.493 - 5.520: 99.7834% ( 1) 00:15:19.169 5.520 - 5.547: 
99.7883% ( 1) 00:15:19.169 5.627 - 5.653: 99.7932% ( 1) 00:15:19.169 5.680 - 5.707: 99.7982% ( 1) 00:15:19.169 5.760 - 5.787: 99.8031% ( 1) 00:15:19.169 5.787 - 5.813: 99.8080% ( 1) 00:15:19.169 5.867 - 5.893: 99.8129% ( 1) 00:15:19.169 5.920 - 5.947: 99.8179% ( 1) 00:15:19.169 6.000 - 6.027: 99.8228% ( 1) 00:15:19.169 6.027 - 6.053: 99.8277% ( 1) 00:15:19.169 6.080 - 6.107: 99.8326% ( 1) 00:15:19.169 6.133 - 6.160: 99.8376% ( 1) 00:15:19.169 6.160 - 6.187: 99.8425% ( 1) 00:15:19.169 6.213 - 6.240: 99.8474% ( 1) 00:15:19.169 6.240 - 6.267: 99.8523% ( 1) 00:15:19.169 6.293 - 6.320: 99.8572% ( 1) 00:15:19.169 6.427 - 6.453: 99.8622% ( 1) 00:15:19.169 6.507 - 6.533: 99.8671% ( 1) 00:15:19.169 6.533 - 6.560: 99.8769% ( 2) 00:15:19.169 6.667 - 6.693: 99.8819% ( 1) 00:15:19.169 6.693 - 6.720: 99.8966% ( 3) 00:15:19.169 6.880 - 6.933: 99.9015% ( 1) 00:15:19.169 6.987 - 7.040: 99.9065% ( 1) 00:15:19.169 [2024-10-09 00:22:49.467256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.169 7.200 - 7.253: 99.9114% ( 1) 00:15:19.169 7.360 - 7.413: 99.9212% ( 2) 00:15:19.169 7.467 - 7.520: 99.9262% ( 1) 00:15:19.169 7.627 - 7.680: 99.9311% ( 1) 00:15:19.169 7.893 - 7.947: 99.9360% ( 1) 00:15:19.169 8.000 - 8.053: 99.9409% ( 1) 00:15:19.169 9.333 - 9.387: 99.9459% ( 1) 00:15:19.169 3986.773 - 4014.080: 100.0000% ( 11) 00:15:19.169 00:15:19.169 Complete histogram 00:15:19.169 ================== 00:15:19.169 Range in us Cumulative Count 00:15:19.169 1.620 - 1.627: 0.0049% ( 1) 00:15:19.169 1.627 - 1.633: 0.0098% ( 1) 00:15:19.169 1.633 - 1.640: 0.6793% ( 136) 00:15:19.169 1.640 - 1.647: 1.1470% ( 95) 00:15:19.169 1.647 - 1.653: 1.2061% ( 12) 00:15:19.169 1.653 - 1.660: 1.2996% ( 19) 00:15:19.169 1.660 - 1.667: 1.3488% ( 10) 00:15:19.169 1.667 - 1.673: 1.3685% ( 4) 00:15:19.169 1.673 - 1.680: 1.3882% ( 4) 00:15:19.169 1.680 - 1.687: 1.4030% ( 3) 00:15:19.169 1.687 - 1.693: 1.4522% ( 10) 00:15:19.169 1.693 - 1.700: 41.2523% ( 8085) 00:15:19.169 1.700 - 1.707: 51.9445% ( 2172) 00:15:19.169 1.707 - 1.720: 72.6888% ( 4214) 00:15:19.169 1.720 - 1.733: 81.9140% ( 1874) 00:15:19.169 1.733 - 1.747: 84.1489% ( 454) 00:15:19.169 1.747 - 1.760: 86.2902% ( 435) 00:15:19.169 1.760 - 1.773: 90.8536% ( 927) 00:15:19.169 1.773 - 1.787: 95.6188% ( 968) 00:15:19.169 1.787 - 1.800: 98.2032% ( 525) 00:15:19.169 1.800 - 1.813: 99.1730% ( 197) 00:15:19.169 1.813 - 1.827: 99.4093% ( 48) 00:15:19.169 1.827 - 1.840: 99.4388% ( 6) 00:15:19.169 1.867 - 1.880: 99.4437% ( 1) 00:15:19.169 2.000 - 2.013: 99.4487% ( 1) 00:15:19.169 3.440 - 3.467: 99.4536% ( 1) 00:15:19.169 3.467 - 3.493: 99.4585% ( 1) 00:15:19.169 3.573 - 3.600: 99.4634% ( 1) 00:15:19.169 3.627 - 3.653: 99.4733% ( 2) 00:15:19.169 3.680 - 3.707: 99.4782% ( 1) 00:15:19.169 3.760 - 3.787: 99.4831% ( 1) 00:15:19.169 4.107 - 4.133: 99.4880% ( 1) 00:15:19.169 4.133 - 4.160: 99.4930% ( 1) 00:15:19.169 4.347 - 4.373: 99.4979% ( 1) 00:15:19.169 4.373 - 4.400: 99.5028% ( 1) 00:15:19.169 4.480 - 4.507: 99.5077% ( 1) 00:15:19.169 4.613 - 4.640: 99.5127% ( 1) 00:15:19.169 4.773 - 4.800: 99.5176% ( 1) 00:15:19.169 4.800 - 4.827: 99.5225% ( 1) 00:15:19.169 5.093 - 5.120: 99.5274% ( 1) 00:15:19.169 5.173 - 5.200: 99.5323% ( 1) 00:15:19.169 5.280 - 5.307: 99.5422% ( 2) 00:15:19.169 5.307 - 5.333: 99.5471% ( 1) 00:15:19.169 5.333 - 5.360: 99.5520% ( 1) 00:15:19.169 5.440 - 5.467: 99.5570% ( 1) 00:15:19.170 5.733 - 5.760: 99.5668% ( 2) 00:15:19.170 5.813 - 5.840: 99.5717% ( 1) 00:15:19.170 5.920 - 5.947: 99.5766% ( 1) 00:15:19.170 
6.160 - 6.187: 99.5816% ( 1) 00:15:19.170 6.240 - 6.267: 99.5865% ( 1) 00:15:19.170 6.667 - 6.693: 99.5914% ( 1) 00:15:19.170 8.213 - 8.267: 99.5963% ( 1) 00:15:19.170 8.800 - 8.853: 99.6013% ( 1) 00:15:19.170 11.467 - 11.520: 99.6062% ( 1) 00:15:19.170 32.853 - 33.067: 99.6111% ( 1) 00:15:19.170 3481.600 - 3495.253: 99.6160% ( 1) 00:15:19.170 3986.773 - 4014.080: 100.0000% ( 78) 00:15:19.170 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:19.170 [ 00:15:19.170 { 00:15:19.170 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.170 "subtype": "Discovery", 00:15:19.170 "listen_addresses": [], 00:15:19.170 "allow_any_host": true, 00:15:19.170 "hosts": [] 00:15:19.170 }, 00:15:19.170 { 00:15:19.170 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:19.170 "subtype": "NVMe", 00:15:19.170 "listen_addresses": [ 00:15:19.170 { 00:15:19.170 "trtype": "VFIOUSER", 00:15:19.170 "adrfam": "IPv4", 00:15:19.170 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:19.170 "trsvcid": "0" 00:15:19.170 } 00:15:19.170 ], 00:15:19.170 "allow_any_host": true, 00:15:19.170 "hosts": [], 00:15:19.170 "serial_number": "SPDK1", 00:15:19.170 "model_number": "SPDK bdev Controller", 00:15:19.170 "max_namespaces": 32, 00:15:19.170 "min_cntlid": 1, 00:15:19.170 "max_cntlid": 65519, 00:15:19.170 "namespaces": [ 00:15:19.170 { 00:15:19.170 "nsid": 1, 00:15:19.170 "bdev_name": "Malloc1", 00:15:19.170 "name": "Malloc1", 00:15:19.170 "nguid": "EBFFD627C5D94420BE3D86858AE252A8", 00:15:19.170 "uuid": "ebffd627-c5d9-4420-be3d-86858ae252a8" 00:15:19.170 }, 00:15:19.170 { 00:15:19.170 "nsid": 2, 00:15:19.170 "bdev_name": "Malloc3", 00:15:19.170 "name": "Malloc3", 00:15:19.170 "nguid": "A1FF26AEAF3342F4B7EE5811FD1A700E", 00:15:19.170 "uuid": "a1ff26ae-af33-42f4-b7ee-5811fd1a700e" 00:15:19.170 } 00:15:19.170 ] 00:15:19.170 }, 00:15:19.170 { 00:15:19.170 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:19.170 "subtype": "NVMe", 00:15:19.170 "listen_addresses": [ 00:15:19.170 { 00:15:19.170 "trtype": "VFIOUSER", 00:15:19.170 "adrfam": "IPv4", 00:15:19.170 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:19.170 "trsvcid": "0" 00:15:19.170 } 00:15:19.170 ], 00:15:19.170 "allow_any_host": true, 00:15:19.170 "hosts": [], 00:15:19.170 "serial_number": "SPDK2", 00:15:19.170 "model_number": "SPDK bdev Controller", 00:15:19.170 "max_namespaces": 32, 00:15:19.170 "min_cntlid": 1, 00:15:19.170 "max_cntlid": 65519, 00:15:19.170 "namespaces": [ 00:15:19.170 { 00:15:19.170 "nsid": 1, 00:15:19.170 "bdev_name": "Malloc2", 00:15:19.170 "name": "Malloc2", 00:15:19.170 "nguid": "9B9E7A37A429470DA3947C085B074A39", 00:15:19.170 "uuid": "9b9e7a37-a429-470d-a394-7c085b074a39" 00:15:19.170 } 00:15:19.170 ] 00:15:19.170 } 00:15:19.170 ] 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3212262 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:19.170 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:19.441 [2024-10-09 00:22:49.835072] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.441 Malloc4 00:15:19.441 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:19.441 [2024-10-09 00:22:50.045723] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.441 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:19.441 Asynchronous Event Request test 00:15:19.441 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.441 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.442 Registering asynchronous event callbacks... 00:15:19.442 Starting namespace attribute notice tests for all controllers... 00:15:19.442 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:19.442 aer_cb - Changed Namespace 00:15:19.442 Cleaning up... 
00:15:19.703 [ 00:15:19.703 { 00:15:19.703 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.703 "subtype": "Discovery", 00:15:19.703 "listen_addresses": [], 00:15:19.703 "allow_any_host": true, 00:15:19.703 "hosts": [] 00:15:19.703 }, 00:15:19.703 { 00:15:19.703 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:19.703 "subtype": "NVMe", 00:15:19.703 "listen_addresses": [ 00:15:19.703 { 00:15:19.703 "trtype": "VFIOUSER", 00:15:19.703 "adrfam": "IPv4", 00:15:19.703 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:19.703 "trsvcid": "0" 00:15:19.703 } 00:15:19.703 ], 00:15:19.703 "allow_any_host": true, 00:15:19.703 "hosts": [], 00:15:19.703 "serial_number": "SPDK1", 00:15:19.703 "model_number": "SPDK bdev Controller", 00:15:19.703 "max_namespaces": 32, 00:15:19.703 "min_cntlid": 1, 00:15:19.703 "max_cntlid": 65519, 00:15:19.703 "namespaces": [ 00:15:19.703 { 00:15:19.703 "nsid": 1, 00:15:19.703 "bdev_name": "Malloc1", 00:15:19.703 "name": "Malloc1", 00:15:19.703 "nguid": "EBFFD627C5D94420BE3D86858AE252A8", 00:15:19.703 "uuid": "ebffd627-c5d9-4420-be3d-86858ae252a8" 00:15:19.703 }, 00:15:19.703 { 00:15:19.703 "nsid": 2, 00:15:19.703 "bdev_name": "Malloc3", 00:15:19.703 "name": "Malloc3", 00:15:19.703 "nguid": "A1FF26AEAF3342F4B7EE5811FD1A700E", 00:15:19.703 "uuid": "a1ff26ae-af33-42f4-b7ee-5811fd1a700e" 00:15:19.703 } 00:15:19.703 ] 00:15:19.703 }, 00:15:19.703 { 00:15:19.703 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:19.703 "subtype": "NVMe", 00:15:19.703 "listen_addresses": [ 00:15:19.703 { 00:15:19.703 "trtype": "VFIOUSER", 00:15:19.703 "adrfam": "IPv4", 00:15:19.703 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:19.703 "trsvcid": "0" 00:15:19.703 } 00:15:19.703 ], 00:15:19.703 "allow_any_host": true, 00:15:19.703 "hosts": [], 00:15:19.703 "serial_number": "SPDK2", 00:15:19.703 "model_number": "SPDK bdev Controller", 00:15:19.703 "max_namespaces": 32, 00:15:19.703 "min_cntlid": 1, 00:15:19.703 "max_cntlid": 65519, 00:15:19.703 "namespaces": [ 00:15:19.703 { 00:15:19.703 "nsid": 1, 00:15:19.703 "bdev_name": "Malloc2", 00:15:19.703 "name": "Malloc2", 00:15:19.703 "nguid": "9B9E7A37A429470DA3947C085B074A39", 00:15:19.703 "uuid": "9b9e7a37-a429-470d-a394-7c085b074a39" 00:15:19.703 }, 00:15:19.703 { 00:15:19.703 "nsid": 2, 00:15:19.703 "bdev_name": "Malloc4", 00:15:19.703 "name": "Malloc4", 00:15:19.703 "nguid": "50F40F37A7274531A256FCB90C837C7D", 00:15:19.703 "uuid": "50f40f37-a727-4531-a256-fcb90c837c7d" 00:15:19.703 } 00:15:19.703 ] 00:15:19.703 } 00:15:19.703 ] 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3212262 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3203269 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3203269 ']' 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3203269 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3203269 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3203269' 00:15:19.703 killing process with pid 3203269 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3203269 00:15:19.703 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3203269 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3212378 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3212378' 00:15:19.964 Process pid: 3212378 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3212378 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3212378 ']' 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:19.964 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:19.964 [2024-10-09 00:22:50.542809] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:19.964 [2024-10-09 00:22:50.543746] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:15:19.964 [2024-10-09 00:22:50.543793] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.225 [2024-10-09 00:22:50.622528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.225 [2024-10-09 00:22:50.683072] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.225 [2024-10-09 00:22:50.683112] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.225 [2024-10-09 00:22:50.683118] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.225 [2024-10-09 00:22:50.683123] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.225 [2024-10-09 00:22:50.683127] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.225 [2024-10-09 00:22:50.684427] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.225 [2024-10-09 00:22:50.684580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.225 [2024-10-09 00:22:50.684748] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.225 [2024-10-09 00:22:50.684772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.225 [2024-10-09 00:22:50.746526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:20.225 [2024-10-09 00:22:50.747529] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:20.225 [2024-10-09 00:22:50.748386] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:20.225 [2024-10-09 00:22:50.748903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:20.225 [2024-10-09 00:22:50.748923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:20.807 00:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.807 00:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:20.807 00:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:21.753 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:22.014 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:22.014 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:22.014 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.014 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:22.014 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:22.322 Malloc1 00:15:22.322 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:22.594 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:22.594 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:22.906 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.906 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:22.906 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:22.906 Malloc2 00:15:22.906 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:23.166 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:23.426 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3212378 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 3212378 ']' 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3212378 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3212378 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3212378' 00:15:23.687 killing process with pid 3212378 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3212378 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3212378 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:23.687 00:15:23.687 real 0m50.940s 00:15:23.687 user 3m15.033s 00:15:23.687 sys 0m2.738s 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.687 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:23.687 ************************************ 00:15:23.687 END TEST nvmf_vfio_user 00:15:23.687 ************************************ 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.949 ************************************ 00:15:23.949 START TEST nvmf_vfio_user_nvme_compliance 00:15:23.949 ************************************ 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:23.949 * Looking for test storage... 
00:15:23.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:23.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.949 --rc genhtml_branch_coverage=1 00:15:23.949 --rc genhtml_function_coverage=1 00:15:23.949 --rc genhtml_legend=1 00:15:23.949 --rc geninfo_all_blocks=1 00:15:23.949 --rc geninfo_unexecuted_blocks=1 00:15:23.949 00:15:23.949 ' 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:23.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.949 --rc genhtml_branch_coverage=1 00:15:23.949 --rc genhtml_function_coverage=1 00:15:23.949 --rc genhtml_legend=1 00:15:23.949 --rc geninfo_all_blocks=1 00:15:23.949 --rc geninfo_unexecuted_blocks=1 00:15:23.949 00:15:23.949 ' 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:23.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.949 --rc genhtml_branch_coverage=1 00:15:23.949 --rc genhtml_function_coverage=1 00:15:23.949 --rc genhtml_legend=1 00:15:23.949 --rc geninfo_all_blocks=1 00:15:23.949 --rc geninfo_unexecuted_blocks=1 00:15:23.949 00:15:23.949 ' 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:23.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.949 --rc genhtml_branch_coverage=1 00:15:23.949 --rc genhtml_function_coverage=1 00:15:23.949 --rc genhtml_legend=1 00:15:23.949 --rc geninfo_all_blocks=1 00:15:23.949 --rc 
geninfo_unexecuted_blocks=1 00:15:23.949 00:15:23.949 ' 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.949 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:24.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3213201 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3213201' 00:15:24.211 Process pid: 3213201 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3213201 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3213201 ']' 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.211 00:22:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:24.211 [2024-10-09 00:22:54.671374] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:15:24.211 [2024-10-09 00:22:54.671448] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.211 [2024-10-09 00:22:54.751334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:24.211 [2024-10-09 00:22:54.813306] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.211 [2024-10-09 00:22:54.813341] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.212 [2024-10-09 00:22:54.813347] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.212 [2024-10-09 00:22:54.813352] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.212 [2024-10-09 00:22:54.813356] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.212 [2024-10-09 00:22:54.814369] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.212 [2024-10-09 00:22:54.814525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.212 [2024-10-09 00:22:54.814525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.152 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.152 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:25.152 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.099 malloc0 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:26.099 00:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.099 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:26.099 00:15:26.099 00:15:26.099 CUnit - A unit testing framework for C - Version 2.1-3 00:15:26.099 http://cunit.sourceforge.net/ 00:15:26.099 00:15:26.099 00:15:26.099 Suite: nvme_compliance 00:15:26.099 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-09 00:22:56.706183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.099 [2024-10-09 00:22:56.707470] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:26.099 [2024-10-09 00:22:56.707482] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:26.099 [2024-10-09 00:22:56.707487] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:26.099 [2024-10-09 00:22:56.709210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.369 passed 00:15:26.369 Test: admin_identify_ctrlr_verify_fused ...[2024-10-09 00:22:56.786708] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.369 [2024-10-09 00:22:56.791747] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.369 passed 00:15:26.369 Test: admin_identify_ns ...[2024-10-09 00:22:56.867314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.369 [2024-10-09 00:22:56.927730] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:26.369 [2024-10-09 00:22:56.935726] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:26.369 [2024-10-09 00:22:56.956804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:26.369 passed 00:15:26.631 Test: admin_get_features_mandatory_features ...[2024-10-09 00:22:57.030029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.631 [2024-10-09 00:22:57.033055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.631 passed 00:15:26.631 Test: admin_get_features_optional_features ...[2024-10-09 00:22:57.110515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.631 [2024-10-09 00:22:57.113536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.631 passed 00:15:26.631 Test: admin_set_features_number_of_queues ...[2024-10-09 00:22:57.191105] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.892 [2024-10-09 00:22:57.296804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.892 passed 00:15:26.892 Test: admin_get_log_page_mandatory_logs ...[2024-10-09 00:22:57.369020] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.892 [2024-10-09 00:22:57.372036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.892 passed 00:15:26.892 Test: admin_get_log_page_with_lpo ...[2024-10-09 00:22:57.447809] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.892 [2024-10-09 00:22:57.516730] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:27.153 [2024-10-09 00:22:57.529772] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.153 passed 00:15:27.153 Test: fabric_property_get ...[2024-10-09 00:22:57.602974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.153 [2024-10-09 00:22:57.604185] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:27.153 [2024-10-09 00:22:57.605997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.153 passed 00:15:27.153 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-09 00:22:57.684435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.153 [2024-10-09 00:22:57.685630] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:27.153 [2024-10-09 00:22:57.687457] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.153 passed 00:15:27.153 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-09 00:22:57.761184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.414 [2024-10-09 00:22:57.845728] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:27.414 [2024-10-09 00:22:57.861727] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:27.414 [2024-10-09 00:22:57.866799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.414 passed 00:15:27.414 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-09 00:22:57.940022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.414 [2024-10-09 00:22:57.941237] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:27.414 [2024-10-09 00:22:57.943040] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.414 passed 00:15:27.414 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-09 00:22:58.019741] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.674 [2024-10-09 00:22:58.097726] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:27.674 [2024-10-09 00:22:58.121729] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:27.674 [2024-10-09 00:22:58.126794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.674 passed 00:15:27.674 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-09 00:22:58.199998] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.674 [2024-10-09 00:22:58.201194] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:27.674 [2024-10-09 00:22:58.201216] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:27.674 [2024-10-09 00:22:58.203014] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.674 passed 00:15:27.674 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-09 00:22:58.277755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.935 [2024-10-09 00:22:58.370724] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:27.935 [2024-10-09 00:22:58.378727] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:27.935 [2024-10-09 00:22:58.386725] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:27.935 [2024-10-09 00:22:58.394730] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:27.935 [2024-10-09 00:22:58.423799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.935 passed 00:15:27.935 Test: admin_create_io_sq_verify_pc ...[2024-10-09 00:22:58.499826] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.935 [2024-10-09 00:22:58.517734] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:27.935 [2024-10-09 00:22:58.534996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.935 passed 00:15:28.195 Test: admin_create_io_qp_max_qps ...[2024-10-09 00:22:58.607439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.140 [2024-10-09 00:22:59.716728] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:29.712 [2024-10-09 00:23:00.105140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.712 passed 00:15:29.712 Test: admin_create_io_sq_shared_cq ...[2024-10-09 00:23:00.181009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.712 [2024-10-09 00:23:00.313736] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:29.972 [2024-10-09 00:23:00.350771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.972 passed 00:15:29.972 00:15:29.972 Run Summary: Type Total Ran Passed Failed Inactive 00:15:29.972 suites 1 1 n/a 0 0 00:15:29.972 tests 18 18 18 0 0 00:15:29.972 asserts 360 
360 360 0 n/a 00:15:29.972 00:15:29.972 Elapsed time = 1.500 seconds 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3213201 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3213201 ']' 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3213201 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3213201 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3213201' 00:15:29.972 killing process with pid 3213201 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3213201 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3213201 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:29.972 00:15:29.972 real 0m6.221s 00:15:29.972 user 0m17.532s 00:15:29.972 sys 0m0.566s 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.972 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.972 ************************************ 00:15:29.972 END TEST nvmf_vfio_user_nvme_compliance 00:15:29.972 ************************************ 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:30.234 ************************************ 00:15:30.234 START TEST nvmf_vfio_user_fuzz 00:15:30.234 ************************************ 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:30.234 * Looking for test storage... 
00:15:30.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.234 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:30.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.234 --rc genhtml_branch_coverage=1 00:15:30.234 --rc genhtml_function_coverage=1 00:15:30.234 --rc genhtml_legend=1 00:15:30.234 --rc geninfo_all_blocks=1 00:15:30.234 --rc geninfo_unexecuted_blocks=1 00:15:30.235 00:15:30.235 ' 00:15:30.235 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:30.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.235 --rc genhtml_branch_coverage=1 00:15:30.235 --rc genhtml_function_coverage=1 00:15:30.235 --rc genhtml_legend=1 00:15:30.235 --rc geninfo_all_blocks=1 00:15:30.235 --rc geninfo_unexecuted_blocks=1 00:15:30.235 00:15:30.235 ' 00:15:30.235 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:30.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.235 --rc genhtml_branch_coverage=1 00:15:30.235 --rc genhtml_function_coverage=1 00:15:30.235 --rc genhtml_legend=1 00:15:30.235 --rc geninfo_all_blocks=1 00:15:30.235 --rc geninfo_unexecuted_blocks=1 00:15:30.235 00:15:30.235 ' 00:15:30.235 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:30.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.235 --rc genhtml_branch_coverage=1 00:15:30.235 --rc genhtml_function_coverage=1 00:15:30.235 --rc genhtml_legend=1 00:15:30.235 --rc geninfo_all_blocks=1 00:15:30.235 --rc geninfo_unexecuted_blocks=1 00:15:30.235 00:15:30.235 ' 00:15:30.235 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.235 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:30.235 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:30.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:30.496 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3214542 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3214542' 00:15:30.497 Process pid: 3214542 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3214542 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3214542 ']' 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
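At this point the harness has launched the target with /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 and is blocking until the application's RPC socket comes up. A minimal sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock path named in the message above (the polling loop stands in for the harness's waitforlisten helper and is illustrative only):

  # Start the NVMe-oF target in the background (flags taken from the trace above).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Wait until the UNIX-domain RPC socket exists before issuing any configuration calls.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done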
00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.497 00:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:31.439 00:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.439 00:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:31.439 00:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.397 malloc0 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
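The rpc_cmd calls traced above configure the fuzz target end to end: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2021-09.io.spdk:cnode0 exposing that bdev on the /var/run/vfio-user socket directory. The same sequence can be issued by hand with SPDK's scripts/rpc.py client; the sketch below is a condensed equivalent of what the harness does (all arguments are the ones shown in the trace), not a copy of its code:

  # Run from the spdk checkout; rpc.py talks to /var/tmp/spdk.sock by default.
  # Create the VFIOUSER transport and a 64 MiB / 512 B-block malloc bdev.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  # Expose the bdev through a subsystem listening on a vfio-user socket directory.
  mkdir -p /var/run/vfio-user
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0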
00:15:32.397 00:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:04.512 Fuzzing completed. Shutting down the fuzz application 00:16:04.512 00:16:04.512 Dumping successful admin opcodes: 00:16:04.512 8, 9, 10, 24, 00:16:04.512 Dumping successful io opcodes: 00:16:04.512 0, 00:16:04.512 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1366201, total successful commands: 5360, random_seed: 307193472 00:16:04.512 NS: 0x200003a1ef00 admin qp, Total commands completed: 338588, total successful commands: 2736, random_seed: 1109014528 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3214542 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3214542 ']' 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3214542 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3214542 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3214542' 00:16:04.512 killing process with pid 3214542 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3214542 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3214542 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:04.512 00:16:04.512 real 0m33.038s 00:16:04.512 user 0m38.059s 00:16:04.512 sys 0m23.879s 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.512 
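The summary above reports 1,366,201 I/O commands (5,360 successful) and 338,588 admin commands (2,736 successful) over the 30-second run. The pass can be repeated against the same endpoint by invoking nvme_fuzz directly with the flags recorded in the trace; the relative path below assumes the command is run from the spdk checkout, and -t/-S can be varied to fuzz longer or with a different seed:

  # 30-second fuzz pass against the vfio-user controller, seed 123456
  # (flags mirror the invocation captured in the log above).
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a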
************************************ 00:16:04.512 END TEST nvmf_vfio_user_fuzz 00:16:04.512 ************************************ 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.512 ************************************ 00:16:04.512 START TEST nvmf_auth_target 00:16:04.512 ************************************ 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:04.512 * Looking for test storage... 00:16:04.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:04.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.512 --rc genhtml_branch_coverage=1 00:16:04.512 --rc genhtml_function_coverage=1 00:16:04.512 --rc genhtml_legend=1 00:16:04.512 --rc geninfo_all_blocks=1 00:16:04.512 --rc geninfo_unexecuted_blocks=1 00:16:04.512 00:16:04.512 ' 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:04.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.512 --rc genhtml_branch_coverage=1 00:16:04.512 --rc genhtml_function_coverage=1 00:16:04.512 --rc genhtml_legend=1 00:16:04.512 --rc geninfo_all_blocks=1 00:16:04.512 --rc geninfo_unexecuted_blocks=1 00:16:04.512 00:16:04.512 ' 00:16:04.512 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:04.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.512 --rc genhtml_branch_coverage=1 00:16:04.512 --rc genhtml_function_coverage=1 00:16:04.512 --rc genhtml_legend=1 00:16:04.512 --rc geninfo_all_blocks=1 00:16:04.512 --rc geninfo_unexecuted_blocks=1 00:16:04.512 00:16:04.513 ' 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:04.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.513 --rc genhtml_branch_coverage=1 00:16:04.513 --rc genhtml_function_coverage=1 00:16:04.513 --rc genhtml_legend=1 00:16:04.513 --rc geninfo_all_blocks=1 00:16:04.513 --rc geninfo_unexecuted_blocks=1 00:16:04.513 00:16:04.513 ' 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.513 00:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.513 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:04.513 00:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:11.129 
00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:11.129 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:11.129 00:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:11.129 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.129 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:11.130 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:11.130 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
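[editor's note] The block above matches supported Intel E810/X722 and Mellanox device IDs against the PCI bus and then resolves each matching function to its kernel net device through sysfs. A small sketch of that lookup, assuming the PCI address printed in the "Found ..." lines above:

    # Resolve the net device(s) behind a PCI function the same way the trace does.
    pci=0000:4b:00.0
    ls "/sys/bus/pci/devices/$pci/net/"               # e.g. prints cvl_0_0
    cat /sys/bus/pci/devices/"$pci"/net/*/operstate   # the test expects "up"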
net_devs+=("${pci_net_devs[@]}") 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:11.130 00:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:11.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:16:11.130 00:16:11.130 --- 10.0.0.2 ping statistics --- 00:16:11.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.130 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:16:11.130 00:16:11.130 --- 10.0.0.1 ping statistics --- 00:16:11.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.130 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3224547 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3224547 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3224547 ']' 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
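[editor's note] Condensed restatement of the network setup the trace just performed: the first E810 port is moved into a namespace and becomes the target side (10.0.0.2), the second port stays in the root namespace as the initiator side (10.0.0.1); interface names, addresses and the iptables rule are taken verbatim from the commands above.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    modprobe nvme-tcp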
00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.130 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3224876 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9ee19f8ffe975635012c80a232f8b2d50a519236d6748cd3 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.wTE 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9ee19f8ffe975635012c80a232f8b2d50a519236d6748cd3 0 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9ee19f8ffe975635012c80a232f8b2d50a519236d6748cd3 0 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9ee19f8ffe975635012c80a232f8b2d50a519236d6748cd3 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.wTE 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.wTE 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wTE 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d0799e296f33a63fdb1b43cc92d5ba82e6b2ef2118aacbb0bbd47f1035df170e 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.vhm 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d0799e296f33a63fdb1b43cc92d5ba82e6b2ef2118aacbb0bbd47f1035df170e 3 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d0799e296f33a63fdb1b43cc92d5ba82e6b2ef2118aacbb0bbd47f1035df170e 3 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d0799e296f33a63fdb1b43cc92d5ba82e6b2ef2118aacbb0bbd47f1035df170e 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.vhm 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.vhm 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.vhm 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b5450279aa7aa2c36b04f9bc0c7f3dc0 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.JYJ 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b5450279aa7aa2c36b04f9bc0c7f3dc0 1 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b5450279aa7aa2c36b04f9bc0c7f3dc0 1 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b5450279aa7aa2c36b04f9bc0c7f3dc0 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.JYJ 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.JYJ 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.JYJ 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9bce77b7072e7be633ac6e47d1274c399d324b82be150984 00:16:12.076 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:12.077 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.FXj 00:16:12.077 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9bce77b7072e7be633ac6e47d1274c399d324b82be150984 2 00:16:12.077 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9bce77b7072e7be633ac6e47d1274c399d324b82be150984 2 00:16:12.077 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:12.077 00:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:12.077 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9bce77b7072e7be633ac6e47d1274c399d324b82be150984 00:16:12.077 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:12.077 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.FXj 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.FXj 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.FXj 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=34376972b0b54c289047db95d2f1a490fdb52568a2369f98 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Vay 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 34376972b0b54c289047db95d2f1a490fdb52568a2369f98 2 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 34376972b0b54c289047db95d2f1a490fdb52568a2369f98 2 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=34376972b0b54c289047db95d2f1a490fdb52568a2369f98 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Vay 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Vay 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Vay 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=81509d932411299cc0fbb6f0c4e53936 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.GqX 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 81509d932411299cc0fbb6f0c4e53936 1 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 81509d932411299cc0fbb6f0c4e53936 1 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=81509d932411299cc0fbb6f0c4e53936 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.GqX 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.GqX 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.GqX 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5678e9c8685a7fb17837a480bca542d6dd1755d2017f806ad4368c29c899ee0d 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.90f 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 5678e9c8685a7fb17837a480bca542d6dd1755d2017f806ad4368c29c899ee0d 3 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5678e9c8685a7fb17837a480bca542d6dd1755d2017f806ad4368c29c899ee0d 3 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5678e9c8685a7fb17837a480bca542d6dd1755d2017f806ad4368c29c899ee0d 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.90f 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.90f 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.90f 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3224547 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3224547 ']' 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.338 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.599 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.599 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:12.599 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3224876 /var/tmp/host.sock 00:16:12.599 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3224876 ']' 00:16:12.599 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:12.599 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.599 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:12.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
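[editor's note] The keys generated above follow a simple recipe: xxd pulls len/2 random bytes as hex, and a short Python step wraps the ASCII hex string into a DHHC-1 secret. The sketch below reproduces that flow; the little-endian CRC32 suffix is inferred from the base64 payloads visible later in this log (their decoded prefix is the hex key itself), so treat it as a best-effort reconstruction rather than the canonical helper.

    #!/usr/bin/env bash
    # Generate one DH-HMAC-CHAP key file the way the trace does.
    # Digest ids match the trace: 0=null, 1=sha256, 2=sha384, 3=sha512.
    len=48        # hex characters, i.e. len/2 random bytes
    digest=0
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)

    python3 - "$digest" "$key" > "$file" <<'EOF'
import base64, sys, zlib
digest, key = int(sys.argv[1]), sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

    chmod 0600 "$file"
    echo "generated $(cat "$file") in $file"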
00:16:12.599 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.599 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wTE 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wTE 00:16:12.859 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wTE 00:16:13.120 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.vhm ]] 00:16:13.120 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vhm 00:16:13.120 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.120 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.120 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.120 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vhm 00:16:13.120 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vhm 00:16:13.392 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JYJ 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.393 00:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.JYJ 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.JYJ 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.FXj ]] 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FXj 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FXj 00:16:13.393 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FXj 00:16:13.660 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:13.660 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Vay 00:16:13.660 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.660 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.660 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.660 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Vay 00:16:13.660 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Vay 00:16:13.922 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.GqX ]] 00:16:13.922 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GqX 00:16:13.922 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.922 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.922 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.922 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GqX 00:16:13.922 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GqX 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:14.182 00:23:44 
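[editor's note] Each key file is registered twice, once with the nvmf target over its default RPC socket and once with the host application listening on /var/tmp/host.sock. Restated for key0/ckey0 with the file names from this run; SPDK_DIR is shorthand for the workspace checkout, and the target-side invocation assumes the default /var/tmp/spdk.sock socket the trace waits on.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # target side (nvmf_tgt, default RPC socket /var/tmp/spdk.sock)
    "$SPDK_DIR/scripts/rpc.py" keyring_file_add_key key0 /tmp/spdk.key-null.wTE
    "$SPDK_DIR/scripts/rpc.py" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vhm

    # host side (spdk_tgt started with -r /var/tmp/host.sock)
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wTE
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vhm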
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.90f 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.90f 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.90f 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.182 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.444 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.444 
00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.704 00:16:14.704 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.704 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.704 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.965 { 00:16:14.965 "cntlid": 1, 00:16:14.965 "qid": 0, 00:16:14.965 "state": "enabled", 00:16:14.965 "thread": "nvmf_tgt_poll_group_000", 00:16:14.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:14.965 "listen_address": { 00:16:14.965 "trtype": "TCP", 00:16:14.965 "adrfam": "IPv4", 00:16:14.965 "traddr": "10.0.0.2", 00:16:14.965 "trsvcid": "4420" 00:16:14.965 }, 00:16:14.965 "peer_address": { 00:16:14.965 "trtype": "TCP", 00:16:14.965 "adrfam": "IPv4", 00:16:14.965 "traddr": "10.0.0.1", 00:16:14.965 "trsvcid": "42590" 00:16:14.965 }, 00:16:14.965 "auth": { 00:16:14.965 "state": "completed", 00:16:14.965 "digest": "sha256", 00:16:14.965 "dhgroup": "null" 00:16:14.965 } 00:16:14.965 } 00:16:14.965 ]' 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.965 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.226 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.226 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.226 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.226 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
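[editor's note] The connect_authenticate step above boils down to three RPCs plus a qpair check, restated here for the sha256/null/key0 iteration; every flag is taken from the invocations in the trace, only the variable names are shorthand.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # 1) restrict the host to one digest/dhgroup combination
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    # 2) allow the host on the subsystem with the key and controller key
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3) attach a controller from the host side using the same key pair
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4) verify the negotiated auth parameters, then detach
    "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0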
DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:15.226 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:15.795 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.795 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.795 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.795 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.795 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.795 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.795 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.795 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.054 00:23:46 
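[editor's note] The same key pair is then exercised through the kernel initiator with nvme-cli, as shown in the connect/disconnect above. A condensed form follows; reading the secrets back from the generated key files is a stand-in for the literal DHHC-1 strings the trace passes on the command line.

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret "$(cat /tmp/spdk.key-null.wTE)" \
        --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.vhm)"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0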
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.054 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.314 00:16:16.314 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.314 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.314 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.575 { 00:16:16.575 "cntlid": 3, 00:16:16.575 "qid": 0, 00:16:16.575 "state": "enabled", 00:16:16.575 "thread": "nvmf_tgt_poll_group_000", 00:16:16.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:16.575 "listen_address": { 00:16:16.575 "trtype": "TCP", 00:16:16.575 "adrfam": "IPv4", 00:16:16.575 "traddr": "10.0.0.2", 00:16:16.575 "trsvcid": "4420" 00:16:16.575 }, 00:16:16.575 "peer_address": { 00:16:16.575 "trtype": "TCP", 00:16:16.575 "adrfam": "IPv4", 00:16:16.575 "traddr": "10.0.0.1", 00:16:16.575 "trsvcid": "42614" 00:16:16.575 }, 00:16:16.575 "auth": { 00:16:16.575 "state": "completed", 00:16:16.575 "digest": "sha256", 00:16:16.575 "dhgroup": "null" 00:16:16.575 } 00:16:16.575 } 00:16:16.575 ]' 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.575 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.836 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:16.836 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:17.408 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.408 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:17.408 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.408 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.408 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.408 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.408 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.408 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.669 00:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.669 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.930 00:16:17.930 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.930 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.930 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.202 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.202 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.202 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.202 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.202 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.202 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.202 { 00:16:18.202 "cntlid": 5, 00:16:18.202 "qid": 0, 00:16:18.202 "state": "enabled", 00:16:18.202 "thread": "nvmf_tgt_poll_group_000", 00:16:18.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:18.202 "listen_address": { 00:16:18.202 "trtype": "TCP", 00:16:18.203 "adrfam": "IPv4", 00:16:18.203 "traddr": "10.0.0.2", 00:16:18.203 "trsvcid": "4420" 00:16:18.203 }, 00:16:18.203 "peer_address": { 00:16:18.203 "trtype": "TCP", 00:16:18.203 "adrfam": "IPv4", 00:16:18.203 "traddr": "10.0.0.1", 00:16:18.203 "trsvcid": "40250" 00:16:18.203 }, 00:16:18.203 "auth": { 00:16:18.203 "state": "completed", 00:16:18.203 "digest": "sha256", 00:16:18.203 "dhgroup": "null" 00:16:18.203 } 00:16:18.203 } 00:16:18.203 ]' 00:16:18.203 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.203 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.203 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.203 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.203 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.203 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.203 00:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.203 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.468 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:18.468 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:19.040 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.040 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.040 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.040 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.040 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.040 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.040 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.040 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.301 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.562 00:16:19.562 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.562 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.562 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.562 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.562 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.562 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.562 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.562 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.562 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.562 { 00:16:19.562 "cntlid": 7, 00:16:19.562 "qid": 0, 00:16:19.562 "state": "enabled", 00:16:19.562 "thread": "nvmf_tgt_poll_group_000", 00:16:19.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:19.562 "listen_address": { 00:16:19.562 "trtype": "TCP", 00:16:19.562 "adrfam": "IPv4", 00:16:19.562 "traddr": "10.0.0.2", 00:16:19.562 "trsvcid": "4420" 00:16:19.562 }, 00:16:19.562 "peer_address": { 00:16:19.562 "trtype": "TCP", 00:16:19.562 "adrfam": "IPv4", 00:16:19.562 "traddr": "10.0.0.1", 00:16:19.562 "trsvcid": "40286" 00:16:19.562 }, 00:16:19.562 "auth": { 00:16:19.562 "state": "completed", 00:16:19.562 "digest": "sha256", 00:16:19.562 "dhgroup": "null" 00:16:19.562 } 00:16:19.562 } 00:16:19.562 ]' 00:16:19.562 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.823 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.823 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.823 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.823 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.823 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.823 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.823 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.084 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:20.084 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:20.655 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.656 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.916 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.916 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.177 { 00:16:21.177 "cntlid": 9, 00:16:21.177 "qid": 0, 00:16:21.177 "state": "enabled", 00:16:21.177 "thread": "nvmf_tgt_poll_group_000", 00:16:21.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:21.177 "listen_address": { 00:16:21.177 "trtype": "TCP", 00:16:21.177 "adrfam": "IPv4", 00:16:21.177 "traddr": "10.0.0.2", 00:16:21.177 "trsvcid": "4420" 00:16:21.177 }, 00:16:21.177 "peer_address": { 00:16:21.177 "trtype": "TCP", 00:16:21.177 "adrfam": "IPv4", 00:16:21.177 "traddr": "10.0.0.1", 00:16:21.177 "trsvcid": "40306" 00:16:21.177 }, 00:16:21.177 "auth": { 00:16:21.177 "state": "completed", 00:16:21.177 "digest": "sha256", 00:16:21.177 "dhgroup": "ffdhe2048" 00:16:21.177 } 00:16:21.177 } 00:16:21.177 ]' 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:21.177 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.438 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.438 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.438 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.438 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:21.438 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:22.007 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.007 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:22.007 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.007 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.007 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.007 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.007 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.007 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.267 00:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.267 00:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.528 00:16:22.528 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.528 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.528 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.807 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.807 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.807 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.807 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.807 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.807 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.807 { 00:16:22.807 "cntlid": 11, 00:16:22.807 "qid": 0, 00:16:22.807 "state": "enabled", 00:16:22.807 "thread": "nvmf_tgt_poll_group_000", 00:16:22.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:22.807 "listen_address": { 00:16:22.807 "trtype": "TCP", 00:16:22.807 "adrfam": "IPv4", 00:16:22.807 "traddr": "10.0.0.2", 00:16:22.807 "trsvcid": "4420" 00:16:22.807 }, 00:16:22.807 "peer_address": { 00:16:22.807 "trtype": "TCP", 00:16:22.807 "adrfam": "IPv4", 00:16:22.807 "traddr": "10.0.0.1", 00:16:22.807 "trsvcid": "40336" 00:16:22.808 }, 00:16:22.808 "auth": { 00:16:22.808 "state": "completed", 00:16:22.808 "digest": "sha256", 00:16:22.808 "dhgroup": "ffdhe2048" 00:16:22.808 } 00:16:22.808 } 00:16:22.808 ]' 00:16:22.808 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.808 00:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.808 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.808 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.808 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.808 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.808 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.808 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.074 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:23.074 00:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:23.644 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.644 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.644 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.644 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.644 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.644 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.644 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.644 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.912 00:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.912 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.172 00:16:24.172 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.172 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.172 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.172 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.172 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.172 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.172 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.431 { 00:16:24.431 "cntlid": 13, 00:16:24.431 "qid": 0, 00:16:24.431 "state": "enabled", 00:16:24.431 "thread": "nvmf_tgt_poll_group_000", 00:16:24.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:24.431 "listen_address": { 00:16:24.431 "trtype": "TCP", 00:16:24.431 "adrfam": "IPv4", 00:16:24.431 "traddr": "10.0.0.2", 00:16:24.431 "trsvcid": "4420" 00:16:24.431 }, 00:16:24.431 "peer_address": { 00:16:24.431 "trtype": "TCP", 00:16:24.431 "adrfam": "IPv4", 00:16:24.431 "traddr": "10.0.0.1", 00:16:24.431 "trsvcid": "40356" 00:16:24.431 }, 00:16:24.431 "auth": { 00:16:24.431 "state": "completed", 00:16:24.431 "digest": 
"sha256", 00:16:24.431 "dhgroup": "ffdhe2048" 00:16:24.431 } 00:16:24.431 } 00:16:24.431 ]' 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.431 00:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.690 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:24.690 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:25.261 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.261 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.261 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.261 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.261 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.261 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.261 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.261 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.521 00:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.521 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.521 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.781 { 00:16:25.781 "cntlid": 15, 00:16:25.781 "qid": 0, 00:16:25.781 "state": "enabled", 00:16:25.781 "thread": "nvmf_tgt_poll_group_000", 00:16:25.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:25.781 "listen_address": { 00:16:25.781 "trtype": "TCP", 00:16:25.781 "adrfam": "IPv4", 00:16:25.781 "traddr": "10.0.0.2", 00:16:25.781 "trsvcid": "4420" 00:16:25.781 }, 00:16:25.781 "peer_address": { 00:16:25.781 "trtype": "TCP", 00:16:25.781 "adrfam": "IPv4", 00:16:25.781 "traddr": "10.0.0.1", 00:16:25.781 
"trsvcid": "40392" 00:16:25.781 }, 00:16:25.781 "auth": { 00:16:25.781 "state": "completed", 00:16:25.781 "digest": "sha256", 00:16:25.781 "dhgroup": "ffdhe2048" 00:16:25.781 } 00:16:25.781 } 00:16:25.781 ]' 00:16:25.781 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.041 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.041 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.041 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.041 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.041 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.041 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.041 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.301 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:26.301 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:26.871 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.871 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.871 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:26.872 00:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.872 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.132 00:16:27.132 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.132 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.132 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.393 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.393 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.393 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.393 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.393 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.393 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.393 { 00:16:27.393 "cntlid": 17, 00:16:27.393 "qid": 0, 00:16:27.393 "state": "enabled", 00:16:27.393 "thread": "nvmf_tgt_poll_group_000", 00:16:27.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:27.393 "listen_address": { 00:16:27.393 "trtype": "TCP", 00:16:27.393 "adrfam": "IPv4", 
00:16:27.393 "traddr": "10.0.0.2", 00:16:27.393 "trsvcid": "4420" 00:16:27.393 }, 00:16:27.393 "peer_address": { 00:16:27.393 "trtype": "TCP", 00:16:27.393 "adrfam": "IPv4", 00:16:27.393 "traddr": "10.0.0.1", 00:16:27.393 "trsvcid": "40408" 00:16:27.393 }, 00:16:27.393 "auth": { 00:16:27.393 "state": "completed", 00:16:27.393 "digest": "sha256", 00:16:27.393 "dhgroup": "ffdhe3072" 00:16:27.393 } 00:16:27.393 } 00:16:27.393 ]' 00:16:27.393 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.393 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.393 00:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.654 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.654 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.654 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.654 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.654 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.654 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:27.654 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:28.594 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.594 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.594 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.594 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.594 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.594 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.594 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.594 00:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.594 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.855 00:16:28.855 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.855 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.855 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.115 { 
00:16:29.115 "cntlid": 19, 00:16:29.115 "qid": 0, 00:16:29.115 "state": "enabled", 00:16:29.115 "thread": "nvmf_tgt_poll_group_000", 00:16:29.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:29.115 "listen_address": { 00:16:29.115 "trtype": "TCP", 00:16:29.115 "adrfam": "IPv4", 00:16:29.115 "traddr": "10.0.0.2", 00:16:29.115 "trsvcid": "4420" 00:16:29.115 }, 00:16:29.115 "peer_address": { 00:16:29.115 "trtype": "TCP", 00:16:29.115 "adrfam": "IPv4", 00:16:29.115 "traddr": "10.0.0.1", 00:16:29.115 "trsvcid": "41108" 00:16:29.115 }, 00:16:29.115 "auth": { 00:16:29.115 "state": "completed", 00:16:29.115 "digest": "sha256", 00:16:29.115 "dhgroup": "ffdhe3072" 00:16:29.115 } 00:16:29.115 } 00:16:29.115 ]' 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.115 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.376 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:29.376 00:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:29.946 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.946 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.946 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.946 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.946 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.946 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.946 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.946 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.207 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.468 00:16:30.468 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.468 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.468 00:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.468 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.468 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.468 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.468 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.468 00:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.468 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.468 { 00:16:30.468 "cntlid": 21, 00:16:30.468 "qid": 0, 00:16:30.468 "state": "enabled", 00:16:30.468 "thread": "nvmf_tgt_poll_group_000", 00:16:30.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:30.468 "listen_address": { 00:16:30.468 "trtype": "TCP", 00:16:30.468 "adrfam": "IPv4", 00:16:30.468 "traddr": "10.0.0.2", 00:16:30.468 "trsvcid": "4420" 00:16:30.468 }, 00:16:30.468 "peer_address": { 00:16:30.468 "trtype": "TCP", 00:16:30.468 "adrfam": "IPv4", 00:16:30.468 "traddr": "10.0.0.1", 00:16:30.468 "trsvcid": "41140" 00:16:30.468 }, 00:16:30.468 "auth": { 00:16:30.468 "state": "completed", 00:16:30.468 "digest": "sha256", 00:16:30.468 "dhgroup": "ffdhe3072" 00:16:30.468 } 00:16:30.468 } 00:16:30.468 ]' 00:16:30.468 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.731 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.732 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.732 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.732 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.732 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.732 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.732 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.046 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:31.046 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:31.355 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.355 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.355 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.355 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.355 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:31.355 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.355 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.355 00:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.631 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.892 00:16:31.892 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.892 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.892 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.152 00:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.152 { 00:16:32.152 "cntlid": 23, 00:16:32.152 "qid": 0, 00:16:32.152 "state": "enabled", 00:16:32.152 "thread": "nvmf_tgt_poll_group_000", 00:16:32.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:32.152 "listen_address": { 00:16:32.152 "trtype": "TCP", 00:16:32.152 "adrfam": "IPv4", 00:16:32.152 "traddr": "10.0.0.2", 00:16:32.152 "trsvcid": "4420" 00:16:32.152 }, 00:16:32.152 "peer_address": { 00:16:32.152 "trtype": "TCP", 00:16:32.152 "adrfam": "IPv4", 00:16:32.152 "traddr": "10.0.0.1", 00:16:32.152 "trsvcid": "41174" 00:16:32.152 }, 00:16:32.152 "auth": { 00:16:32.152 "state": "completed", 00:16:32.152 "digest": "sha256", 00:16:32.152 "dhgroup": "ffdhe3072" 00:16:32.152 } 00:16:32.152 } 00:16:32.152 ]' 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.152 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.412 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:32.412 00:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:32.983 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.983 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.983 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.983 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.983 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:32.983 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.983 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.983 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.983 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.244 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.506 00:16:33.506 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.506 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.506 00:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.772 { 00:16:33.772 "cntlid": 25, 00:16:33.772 "qid": 0, 00:16:33.772 "state": "enabled", 00:16:33.772 "thread": "nvmf_tgt_poll_group_000", 00:16:33.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.772 "listen_address": { 00:16:33.772 "trtype": "TCP", 00:16:33.772 "adrfam": "IPv4", 00:16:33.772 "traddr": "10.0.0.2", 00:16:33.772 "trsvcid": "4420" 00:16:33.772 }, 00:16:33.772 "peer_address": { 00:16:33.772 "trtype": "TCP", 00:16:33.772 "adrfam": "IPv4", 00:16:33.772 "traddr": "10.0.0.1", 00:16:33.772 "trsvcid": "41208" 00:16:33.772 }, 00:16:33.772 "auth": { 00:16:33.772 "state": "completed", 00:16:33.772 "digest": "sha256", 00:16:33.772 "dhgroup": "ffdhe4096" 00:16:33.772 } 00:16:33.772 } 00:16:33.772 ]' 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.772 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.033 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:34.033 00:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:34.605 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.605 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.605 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.605 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.605 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.605 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.605 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.605 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.866 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.867 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.128 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.128 { 00:16:35.128 "cntlid": 27, 00:16:35.128 "qid": 0, 00:16:35.128 "state": "enabled", 00:16:35.128 "thread": "nvmf_tgt_poll_group_000", 00:16:35.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:35.128 "listen_address": { 00:16:35.128 "trtype": "TCP", 00:16:35.128 "adrfam": "IPv4", 00:16:35.128 "traddr": "10.0.0.2", 00:16:35.128 "trsvcid": "4420" 00:16:35.128 }, 00:16:35.128 "peer_address": { 00:16:35.128 "trtype": "TCP", 00:16:35.128 "adrfam": "IPv4", 00:16:35.128 "traddr": "10.0.0.1", 00:16:35.128 "trsvcid": "41240" 00:16:35.128 }, 00:16:35.128 "auth": { 00:16:35.128 "state": "completed", 00:16:35.128 "digest": "sha256", 00:16:35.128 "dhgroup": "ffdhe4096" 00:16:35.128 } 00:16:35.128 } 00:16:35.128 ]' 00:16:35.128 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.388 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.388 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.388 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.388 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.388 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.388 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.388 00:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.648 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:35.648 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:36.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.218 00:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.478 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
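(For reference: the connect/verify/detach cycle traced above reduces to the RPC sequence sketched below. This is a minimal sketch that only restates commands already visible in this trace; the key names key2/ckey2 refer to keyring entries registered earlier in the run, outside this excerpt, and the target-side calls are shown against the target's default RPC socket, which the test script invokes through its rpc_cmd helper.)

    # Host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup pair under test
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # Target side: allow the host NQN on the subsystem with the matching key pair
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: attach a controller, authenticating in both directions
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Target side: confirm the qpair reports auth state "completed" with the expected digest/dhgroup
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    # Host side: tear the controller down before the next digest/dhgroup/key combination
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0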
00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.739 { 00:16:36.739 "cntlid": 29, 00:16:36.739 "qid": 0, 00:16:36.739 "state": "enabled", 00:16:36.739 "thread": "nvmf_tgt_poll_group_000", 00:16:36.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.739 "listen_address": { 00:16:36.739 "trtype": "TCP", 00:16:36.739 "adrfam": "IPv4", 00:16:36.739 "traddr": "10.0.0.2", 00:16:36.739 "trsvcid": "4420" 00:16:36.739 }, 00:16:36.739 "peer_address": { 00:16:36.739 "trtype": "TCP", 00:16:36.739 "adrfam": "IPv4", 00:16:36.739 "traddr": "10.0.0.1", 00:16:36.739 "trsvcid": "41276" 00:16:36.739 }, 00:16:36.739 "auth": { 00:16:36.739 "state": "completed", 00:16:36.739 "digest": "sha256", 00:16:36.739 "dhgroup": "ffdhe4096" 00:16:36.739 } 00:16:36.739 } 00:16:36.739 ]' 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.739 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.016 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.016 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.016 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.016 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.016 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.016 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:37.016 00:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: 
--dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:37.595 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.855 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.115 00:16:38.115 00:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.115 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.115 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.376 { 00:16:38.376 "cntlid": 31, 00:16:38.376 "qid": 0, 00:16:38.376 "state": "enabled", 00:16:38.376 "thread": "nvmf_tgt_poll_group_000", 00:16:38.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.376 "listen_address": { 00:16:38.376 "trtype": "TCP", 00:16:38.376 "adrfam": "IPv4", 00:16:38.376 "traddr": "10.0.0.2", 00:16:38.376 "trsvcid": "4420" 00:16:38.376 }, 00:16:38.376 "peer_address": { 00:16:38.376 "trtype": "TCP", 00:16:38.376 "adrfam": "IPv4", 00:16:38.376 "traddr": "10.0.0.1", 00:16:38.376 "trsvcid": "53840" 00:16:38.376 }, 00:16:38.376 "auth": { 00:16:38.376 "state": "completed", 00:16:38.376 "digest": "sha256", 00:16:38.376 "dhgroup": "ffdhe4096" 00:16:38.376 } 00:16:38.376 } 00:16:38.376 ]' 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.376 00:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.636 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.636 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.636 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.636 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:38.636 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:39.206 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.206 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.206 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.206 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.466 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.466 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.466 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.466 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.466 00:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.466 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.727 00:16:39.987 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.987 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.987 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.987 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.987 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.987 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.987 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.987 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.987 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.987 { 00:16:39.987 "cntlid": 33, 00:16:39.987 "qid": 0, 00:16:39.987 "state": "enabled", 00:16:39.987 "thread": "nvmf_tgt_poll_group_000", 00:16:39.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.987 "listen_address": { 00:16:39.987 "trtype": "TCP", 00:16:39.987 "adrfam": "IPv4", 00:16:39.987 "traddr": "10.0.0.2", 00:16:39.987 "trsvcid": "4420" 00:16:39.987 }, 00:16:39.987 "peer_address": { 00:16:39.987 "trtype": "TCP", 00:16:39.987 "adrfam": "IPv4", 00:16:39.987 "traddr": "10.0.0.1", 00:16:39.987 "trsvcid": "53864" 00:16:39.987 }, 00:16:39.987 "auth": { 00:16:39.987 "state": "completed", 00:16:39.987 "digest": "sha256", 00:16:39.987 "dhgroup": "ffdhe6144" 00:16:39.987 } 00:16:39.987 } 00:16:39.988 ]' 00:16:39.988 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.249 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.249 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.249 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.249 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.249 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.249 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.249 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.523 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:40.523 00:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:41.099 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.099 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.099 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.099 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.099 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.100 00:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.672 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.672 { 00:16:41.672 "cntlid": 35, 00:16:41.672 "qid": 0, 00:16:41.672 "state": "enabled", 00:16:41.672 "thread": "nvmf_tgt_poll_group_000", 00:16:41.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.672 "listen_address": { 00:16:41.672 "trtype": "TCP", 00:16:41.672 "adrfam": "IPv4", 00:16:41.672 "traddr": "10.0.0.2", 00:16:41.672 "trsvcid": "4420" 00:16:41.672 }, 00:16:41.672 "peer_address": { 00:16:41.672 "trtype": "TCP", 00:16:41.672 "adrfam": "IPv4", 00:16:41.672 "traddr": "10.0.0.1", 00:16:41.672 "trsvcid": "53902" 00:16:41.672 }, 00:16:41.672 "auth": { 00:16:41.672 "state": "completed", 00:16:41.672 "digest": "sha256", 00:16:41.672 "dhgroup": "ffdhe6144" 00:16:41.672 } 00:16:41.672 } 00:16:41.672 ]' 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.672 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.932 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.932 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.932 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.932 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.932 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.193 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:42.193 00:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.771 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.343 00:16:43.343 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.343 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.343 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.343 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.343 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.343 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.343 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.343 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.343 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.343 { 00:16:43.343 "cntlid": 37, 00:16:43.343 "qid": 0, 00:16:43.343 "state": "enabled", 00:16:43.343 "thread": "nvmf_tgt_poll_group_000", 00:16:43.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:43.343 "listen_address": { 00:16:43.343 "trtype": "TCP", 00:16:43.343 "adrfam": "IPv4", 00:16:43.343 "traddr": "10.0.0.2", 00:16:43.343 "trsvcid": "4420" 00:16:43.343 }, 00:16:43.343 "peer_address": { 00:16:43.343 "trtype": "TCP", 00:16:43.343 "adrfam": "IPv4", 00:16:43.343 "traddr": "10.0.0.1", 00:16:43.343 "trsvcid": "53940" 00:16:43.343 }, 00:16:43.343 "auth": { 00:16:43.343 "state": "completed", 00:16:43.343 "digest": "sha256", 00:16:43.344 "dhgroup": "ffdhe6144" 00:16:43.344 } 00:16:43.344 } 00:16:43.344 ]' 00:16:43.344 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.344 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.344 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.344 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.344 00:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.604 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.604 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:43.604 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.604 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:43.604 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:44.176 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.176 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.176 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.176 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.176 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.176 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.176 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.176 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.436 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:44.436 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.436 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.436 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.436 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.436 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.436 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:44.436 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.436 00:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.436 00:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.436 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.436 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.436 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.697 00:16:44.956 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.956 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.957 { 00:16:44.957 "cntlid": 39, 00:16:44.957 "qid": 0, 00:16:44.957 "state": "enabled", 00:16:44.957 "thread": "nvmf_tgt_poll_group_000", 00:16:44.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.957 "listen_address": { 00:16:44.957 "trtype": "TCP", 00:16:44.957 "adrfam": "IPv4", 00:16:44.957 "traddr": "10.0.0.2", 00:16:44.957 "trsvcid": "4420" 00:16:44.957 }, 00:16:44.957 "peer_address": { 00:16:44.957 "trtype": "TCP", 00:16:44.957 "adrfam": "IPv4", 00:16:44.957 "traddr": "10.0.0.1", 00:16:44.957 "trsvcid": "53964" 00:16:44.957 }, 00:16:44.957 "auth": { 00:16:44.957 "state": "completed", 00:16:44.957 "digest": "sha256", 00:16:44.957 "dhgroup": "ffdhe6144" 00:16:44.957 } 00:16:44.957 } 00:16:44.957 ]' 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.957 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.217 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.217 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.217 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:45.217 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.217 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.217 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:45.217 00:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:46.164 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.165 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.165 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.165 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.165 00:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.735 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.735 { 00:16:46.735 "cntlid": 41, 00:16:46.735 "qid": 0, 00:16:46.735 "state": "enabled", 00:16:46.735 "thread": "nvmf_tgt_poll_group_000", 00:16:46.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.735 "listen_address": { 00:16:46.735 "trtype": "TCP", 00:16:46.735 "adrfam": "IPv4", 00:16:46.735 "traddr": "10.0.0.2", 00:16:46.735 "trsvcid": "4420" 00:16:46.735 }, 00:16:46.735 "peer_address": { 00:16:46.735 "trtype": "TCP", 00:16:46.735 "adrfam": "IPv4", 00:16:46.735 "traddr": "10.0.0.1", 00:16:46.735 "trsvcid": "53996" 00:16:46.735 }, 00:16:46.735 "auth": { 00:16:46.735 "state": "completed", 00:16:46.735 "digest": "sha256", 00:16:46.735 "dhgroup": "ffdhe8192" 00:16:46.735 } 00:16:46.735 } 00:16:46.735 ]' 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.735 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.996 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.996 00:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.996 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.996 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.996 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.257 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:47.257 00:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.839 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.409 00:16:48.409 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.409 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.409 00:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.670 { 00:16:48.670 "cntlid": 43, 00:16:48.670 "qid": 0, 00:16:48.670 "state": "enabled", 00:16:48.670 "thread": "nvmf_tgt_poll_group_000", 00:16:48.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:48.670 "listen_address": { 00:16:48.670 "trtype": "TCP", 00:16:48.670 "adrfam": "IPv4", 00:16:48.670 "traddr": "10.0.0.2", 00:16:48.670 "trsvcid": "4420" 00:16:48.670 }, 00:16:48.670 "peer_address": { 00:16:48.670 "trtype": "TCP", 00:16:48.670 "adrfam": "IPv4", 00:16:48.670 "traddr": "10.0.0.1", 00:16:48.670 "trsvcid": "40908" 00:16:48.670 }, 00:16:48.670 "auth": { 00:16:48.670 "state": "completed", 00:16:48.670 "digest": "sha256", 00:16:48.670 "dhgroup": "ffdhe8192" 00:16:48.670 } 00:16:48.670 } 00:16:48.670 ]' 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.670 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.931 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:48.931 00:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:49.501 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.501 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.501 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.501 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.501 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.501 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.501 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.501 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.762 00:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.762 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.333 00:16:50.333 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.333 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.333 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.333 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.333 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.333 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.333 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.333 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.333 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.333 { 00:16:50.333 "cntlid": 45, 00:16:50.333 "qid": 0, 00:16:50.333 "state": "enabled", 00:16:50.333 "thread": "nvmf_tgt_poll_group_000", 00:16:50.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.333 "listen_address": { 00:16:50.333 "trtype": "TCP", 00:16:50.333 "adrfam": "IPv4", 00:16:50.333 "traddr": "10.0.0.2", 00:16:50.333 "trsvcid": "4420" 00:16:50.333 }, 00:16:50.333 "peer_address": { 00:16:50.333 "trtype": "TCP", 00:16:50.333 "adrfam": "IPv4", 00:16:50.333 "traddr": "10.0.0.1", 00:16:50.333 "trsvcid": "40936" 00:16:50.333 }, 00:16:50.333 "auth": { 00:16:50.333 "state": "completed", 00:16:50.333 "digest": "sha256", 00:16:50.333 "dhgroup": "ffdhe8192" 00:16:50.333 } 00:16:50.333 } 00:16:50.333 ]' 00:16:50.333 
00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.593 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.593 00:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.593 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.593 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.593 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.593 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.593 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.854 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:50.854 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:51.433 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.433 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.433 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.433 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.433 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.433 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.433 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.433 00:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.433 00:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.433 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.005 00:16:52.005 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.005 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.005 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.266 { 00:16:52.266 "cntlid": 47, 00:16:52.266 "qid": 0, 00:16:52.266 "state": "enabled", 00:16:52.266 "thread": "nvmf_tgt_poll_group_000", 00:16:52.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.266 "listen_address": { 00:16:52.266 "trtype": "TCP", 00:16:52.266 "adrfam": "IPv4", 00:16:52.266 "traddr": "10.0.0.2", 00:16:52.266 "trsvcid": "4420" 00:16:52.266 }, 00:16:52.266 "peer_address": { 00:16:52.266 "trtype": "TCP", 00:16:52.266 "adrfam": "IPv4", 00:16:52.266 "traddr": "10.0.0.1", 00:16:52.266 "trsvcid": "40958" 00:16:52.266 }, 00:16:52.266 "auth": { 00:16:52.266 "state": "completed", 00:16:52.266 
"digest": "sha256", 00:16:52.266 "dhgroup": "ffdhe8192" 00:16:52.266 } 00:16:52.266 } 00:16:52.266 ]' 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.266 00:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.528 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:52.528 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.100 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.372 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:53.373 00:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.373 00:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.637 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.637 { 00:16:53.637 "cntlid": 49, 00:16:53.637 "qid": 0, 00:16:53.637 "state": "enabled", 00:16:53.637 "thread": "nvmf_tgt_poll_group_000", 00:16:53.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:53.637 "listen_address": { 00:16:53.637 "trtype": "TCP", 00:16:53.637 "adrfam": "IPv4", 
00:16:53.637 "traddr": "10.0.0.2", 00:16:53.637 "trsvcid": "4420" 00:16:53.637 }, 00:16:53.637 "peer_address": { 00:16:53.637 "trtype": "TCP", 00:16:53.637 "adrfam": "IPv4", 00:16:53.637 "traddr": "10.0.0.1", 00:16:53.637 "trsvcid": "40998" 00:16:53.637 }, 00:16:53.637 "auth": { 00:16:53.637 "state": "completed", 00:16:53.637 "digest": "sha384", 00:16:53.637 "dhgroup": "null" 00:16:53.637 } 00:16:53.637 } 00:16:53.637 ]' 00:16:53.637 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.898 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.898 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.898 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:53.898 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.898 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.898 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.898 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.159 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:54.159 00:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.731 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.991 00:16:54.991 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.991 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.991 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.251 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.251 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.251 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.251 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.251 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.251 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.251 { 00:16:55.251 "cntlid": 51, 00:16:55.251 "qid": 0, 00:16:55.251 "state": "enabled", 
00:16:55.251 "thread": "nvmf_tgt_poll_group_000", 00:16:55.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.251 "listen_address": { 00:16:55.251 "trtype": "TCP", 00:16:55.251 "adrfam": "IPv4", 00:16:55.251 "traddr": "10.0.0.2", 00:16:55.251 "trsvcid": "4420" 00:16:55.251 }, 00:16:55.251 "peer_address": { 00:16:55.251 "trtype": "TCP", 00:16:55.251 "adrfam": "IPv4", 00:16:55.251 "traddr": "10.0.0.1", 00:16:55.251 "trsvcid": "41012" 00:16:55.251 }, 00:16:55.251 "auth": { 00:16:55.251 "state": "completed", 00:16:55.251 "digest": "sha384", 00:16:55.251 "dhgroup": "null" 00:16:55.251 } 00:16:55.251 } 00:16:55.251 ]' 00:16:55.251 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.251 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.252 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.252 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.252 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.511 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.511 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.511 00:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.511 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:55.511 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:16:56.083 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.083 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.083 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.083 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.083 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.083 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.083 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:56.083 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.344 00:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.606 00:16:56.606 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.606 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.606 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.867 00:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.867 { 00:16:56.867 "cntlid": 53, 00:16:56.867 "qid": 0, 00:16:56.867 "state": "enabled", 00:16:56.867 "thread": "nvmf_tgt_poll_group_000", 00:16:56.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:56.867 "listen_address": { 00:16:56.867 "trtype": "TCP", 00:16:56.867 "adrfam": "IPv4", 00:16:56.867 "traddr": "10.0.0.2", 00:16:56.867 "trsvcid": "4420" 00:16:56.867 }, 00:16:56.867 "peer_address": { 00:16:56.867 "trtype": "TCP", 00:16:56.867 "adrfam": "IPv4", 00:16:56.867 "traddr": "10.0.0.1", 00:16:56.867 "trsvcid": "41044" 00:16:56.867 }, 00:16:56.867 "auth": { 00:16:56.867 "state": "completed", 00:16:56.867 "digest": "sha384", 00:16:56.867 "dhgroup": "null" 00:16:56.867 } 00:16:56.867 } 00:16:56.867 ]' 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.867 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.127 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:57.127 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:16:57.698 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.699 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.699 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.699 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.699 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.699 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:57.699 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.699 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.960 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.221 00:16:58.221 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.221 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.221 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.500 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.501 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.501 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.501 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.501 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.501 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.501 { 00:16:58.501 "cntlid": 55, 00:16:58.501 "qid": 0, 00:16:58.501 "state": "enabled", 00:16:58.501 "thread": "nvmf_tgt_poll_group_000", 00:16:58.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:58.501 "listen_address": { 00:16:58.501 "trtype": "TCP", 00:16:58.501 "adrfam": "IPv4", 00:16:58.501 "traddr": "10.0.0.2", 00:16:58.501 "trsvcid": "4420" 00:16:58.501 }, 00:16:58.501 "peer_address": { 00:16:58.501 "trtype": "TCP", 00:16:58.501 "adrfam": "IPv4", 00:16:58.501 "traddr": "10.0.0.1", 00:16:58.501 "trsvcid": "34322" 00:16:58.501 }, 00:16:58.501 "auth": { 00:16:58.501 "state": "completed", 00:16:58.501 "digest": "sha384", 00:16:58.501 "dhgroup": "null" 00:16:58.501 } 00:16:58.501 } 00:16:58.501 ]' 00:16:58.501 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.501 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.501 00:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.501 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.501 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.501 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.501 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.501 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.769 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:58.769 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:16:59.340 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.340 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.340 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.340 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.340 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.340 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.340 00:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.340 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.340 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.600 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.861 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.861 { 00:16:59.861 "cntlid": 57, 00:16:59.861 "qid": 0, 00:16:59.861 "state": "enabled", 00:16:59.861 "thread": "nvmf_tgt_poll_group_000", 00:16:59.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.861 "listen_address": { 00:16:59.861 "trtype": "TCP", 00:16:59.861 "adrfam": "IPv4", 00:16:59.861 "traddr": "10.0.0.2", 00:16:59.861 "trsvcid": "4420" 00:16:59.861 }, 00:16:59.861 "peer_address": { 00:16:59.861 "trtype": "TCP", 00:16:59.861 "adrfam": "IPv4", 00:16:59.861 "traddr": "10.0.0.1", 00:16:59.861 "trsvcid": "34352" 00:16:59.861 }, 00:16:59.861 "auth": { 00:16:59.861 "state": "completed", 00:16:59.861 "digest": "sha384", 00:16:59.861 "dhgroup": "ffdhe2048" 00:16:59.861 } 00:16:59.861 } 00:16:59.861 ]' 00:16:59.861 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.121 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.121 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.121 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.121 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.121 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.121 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.121 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.381 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:00.381 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.963 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.964 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.964 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.229 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.229 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.229 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.229 00:17:01.229 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.229 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.229 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.490 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.490 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.490 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.490 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.490 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.490 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.490 { 00:17:01.490 "cntlid": 59, 00:17:01.490 "qid": 0, 00:17:01.490 "state": "enabled", 00:17:01.490 "thread": "nvmf_tgt_poll_group_000", 00:17:01.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.490 "listen_address": { 00:17:01.490 "trtype": "TCP", 00:17:01.490 "adrfam": "IPv4", 00:17:01.490 "traddr": "10.0.0.2", 00:17:01.490 "trsvcid": "4420" 00:17:01.490 }, 00:17:01.490 "peer_address": { 00:17:01.490 "trtype": "TCP", 00:17:01.490 "adrfam": "IPv4", 00:17:01.490 "traddr": "10.0.0.1", 00:17:01.490 "trsvcid": "34388" 00:17:01.490 }, 00:17:01.490 "auth": { 00:17:01.490 "state": "completed", 00:17:01.490 "digest": "sha384", 00:17:01.490 "dhgroup": "ffdhe2048" 00:17:01.490 } 00:17:01.490 } 00:17:01.490 ]' 00:17:01.490 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.490 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.490 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.750 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.750 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.750 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.750 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.750 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.750 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:01.750 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:02.690 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.690 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.690 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.690 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.690 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.690 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.690 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.690 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.690 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.950 00:17:02.950 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.950 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.950 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.211 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.211 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.212 { 00:17:03.212 "cntlid": 61, 00:17:03.212 "qid": 0, 00:17:03.212 "state": "enabled", 00:17:03.212 "thread": "nvmf_tgt_poll_group_000", 00:17:03.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.212 "listen_address": { 00:17:03.212 "trtype": "TCP", 00:17:03.212 "adrfam": "IPv4", 00:17:03.212 "traddr": "10.0.0.2", 00:17:03.212 "trsvcid": "4420" 00:17:03.212 }, 00:17:03.212 "peer_address": { 00:17:03.212 "trtype": "TCP", 00:17:03.212 "adrfam": "IPv4", 00:17:03.212 "traddr": "10.0.0.1", 00:17:03.212 "trsvcid": "34426" 00:17:03.212 }, 00:17:03.212 "auth": { 00:17:03.212 "state": "completed", 00:17:03.212 "digest": "sha384", 00:17:03.212 "dhgroup": "ffdhe2048" 00:17:03.212 } 00:17:03.212 } 00:17:03.212 ]' 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.212 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.473 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:03.473 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:04.043 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.043 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.043 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.043 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.043 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.043 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.043 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.043 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.304 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.576 00:17:04.576 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.576 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.576 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.576 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.576 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.576 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.577 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.577 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.577 { 00:17:04.577 "cntlid": 63, 00:17:04.577 "qid": 0, 00:17:04.577 "state": "enabled", 00:17:04.577 "thread": "nvmf_tgt_poll_group_000", 00:17:04.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:04.577 "listen_address": { 00:17:04.577 "trtype": "TCP", 00:17:04.577 "adrfam": "IPv4", 00:17:04.577 "traddr": "10.0.0.2", 00:17:04.577 "trsvcid": "4420" 00:17:04.577 }, 00:17:04.577 "peer_address": { 00:17:04.577 "trtype": "TCP", 00:17:04.577 "adrfam": "IPv4", 00:17:04.577 "traddr": "10.0.0.1", 00:17:04.577 "trsvcid": "34452" 00:17:04.577 }, 00:17:04.577 "auth": { 00:17:04.577 "state": "completed", 00:17:04.577 "digest": "sha384", 00:17:04.577 "dhgroup": "ffdhe2048" 00:17:04.577 } 00:17:04.577 } 00:17:04.577 ]' 00:17:04.577 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.577 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.577 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.845 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.845 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.845 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.845 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.845 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.845 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:04.845 00:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:05.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.786 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.047 
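[editor note] Each pass logged above (and the ffdhe3072/ffdhe4096 passes that follow) runs the same connect_authenticate sequence. The sketch below condenses one pass from the xtrace: the rpc.py path, host socket, NQNs, and flags are copied from the log; the standalone variables and the target-side default RPC socket are illustrative assumptions, not verbatim auth.sh code.

  # One connect_authenticate pass (sha384 / ffdhe3072 / key0), reconstructed from the xtrace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Host-side SPDK app (listening on /var/tmp/host.sock): restrict the offered digest/dhgroup.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Target side (default RPC socket, assumed /var/tmp/spdk.sock): register the host with its DH-HMAC-CHAP keys.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attaching a controller triggers the authentication transaction.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify and tear down, as the log does after every attach.
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'          # expect: completed
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0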
00:17:06.047 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.047 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.047 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.047 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.047 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.047 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.047 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.308 { 00:17:06.308 "cntlid": 65, 00:17:06.308 "qid": 0, 00:17:06.308 "state": "enabled", 00:17:06.308 "thread": "nvmf_tgt_poll_group_000", 00:17:06.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.308 "listen_address": { 00:17:06.308 "trtype": "TCP", 00:17:06.308 "adrfam": "IPv4", 00:17:06.308 "traddr": "10.0.0.2", 00:17:06.308 "trsvcid": "4420" 00:17:06.308 }, 00:17:06.308 "peer_address": { 00:17:06.308 "trtype": "TCP", 00:17:06.308 "adrfam": "IPv4", 00:17:06.308 "traddr": "10.0.0.1", 00:17:06.308 "trsvcid": "34484" 00:17:06.308 }, 00:17:06.308 "auth": { 00:17:06.308 "state": "completed", 00:17:06.308 "digest": "sha384", 00:17:06.308 "dhgroup": "ffdhe3072" 00:17:06.308 } 00:17:06.308 } 00:17:06.308 ]' 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.308 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.569 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:06.569 00:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.141 00:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.402 00:17:07.402 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.402 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.402 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.663 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.663 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.663 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.663 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.663 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.663 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.663 { 00:17:07.663 "cntlid": 67, 00:17:07.663 "qid": 0, 00:17:07.663 "state": "enabled", 00:17:07.663 "thread": "nvmf_tgt_poll_group_000", 00:17:07.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.663 "listen_address": { 00:17:07.663 "trtype": "TCP", 00:17:07.663 "adrfam": "IPv4", 00:17:07.663 "traddr": "10.0.0.2", 00:17:07.663 "trsvcid": "4420" 00:17:07.663 }, 00:17:07.663 "peer_address": { 00:17:07.663 "trtype": "TCP", 00:17:07.663 "adrfam": "IPv4", 00:17:07.663 "traddr": "10.0.0.1", 00:17:07.663 "trsvcid": "34516" 00:17:07.663 }, 00:17:07.663 "auth": { 00:17:07.663 "state": "completed", 00:17:07.663 "digest": "sha384", 00:17:07.663 "dhgroup": "ffdhe3072" 00:17:07.663 } 00:17:07.663 } 00:17:07.663 ]' 00:17:07.663 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.663 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.663 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.924 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.924 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.924 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.924 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.924 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.924 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret 
DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:07.924 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:08.495 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.495 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.495 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.495 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.756 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.016 00:17:09.016 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.016 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.016 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.276 { 00:17:09.276 "cntlid": 69, 00:17:09.276 "qid": 0, 00:17:09.276 "state": "enabled", 00:17:09.276 "thread": "nvmf_tgt_poll_group_000", 00:17:09.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:09.276 "listen_address": { 00:17:09.276 "trtype": "TCP", 00:17:09.276 "adrfam": "IPv4", 00:17:09.276 "traddr": "10.0.0.2", 00:17:09.276 "trsvcid": "4420" 00:17:09.276 }, 00:17:09.276 "peer_address": { 00:17:09.276 "trtype": "TCP", 00:17:09.276 "adrfam": "IPv4", 00:17:09.276 "traddr": "10.0.0.1", 00:17:09.276 "trsvcid": "48878" 00:17:09.276 }, 00:17:09.276 "auth": { 00:17:09.276 "state": "completed", 00:17:09.276 "digest": "sha384", 00:17:09.276 "dhgroup": "ffdhe3072" 00:17:09.276 } 00:17:09.276 } 00:17:09.276 ]' 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.276 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:09.536 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:09.536 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:10.175 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.175 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.175 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.175 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.175 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.175 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.175 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.175 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
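[editor note] The nvme connect / nvme disconnect entries above are the kernel-initiator leg of the same pass: nvme-cli re-uses the host NQN registered on the target together with the matching DHHC-1 secrets. A minimal sketch using only the flags shown in the log; the secrets are abbreviated here (the test passes the full DHHC-1:0x:...: strings printed above), and generating fresh keys, e.g. with nvme gen-dhchap-key, is an assumption about normal usage rather than something this test does.

  # Kernel-initiator authentication, mirroring the nvme_connect / nvme disconnect entries above.
  # hostnqn/hostid and -l 0 (ctrl-loss-tmo) are copied from the log; secrets abbreviated.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret "DHHC-1:02:...==:" --dhchap-ctrl-secret "DHHC-1:01:...:"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: "disconnected 1 controller(s)"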
00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.475 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.736 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.736 { 00:17:10.736 "cntlid": 71, 00:17:10.736 "qid": 0, 00:17:10.736 "state": "enabled", 00:17:10.736 "thread": "nvmf_tgt_poll_group_000", 00:17:10.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.736 "listen_address": { 00:17:10.736 "trtype": "TCP", 00:17:10.736 "adrfam": "IPv4", 00:17:10.736 "traddr": "10.0.0.2", 00:17:10.736 "trsvcid": "4420" 00:17:10.736 }, 00:17:10.736 "peer_address": { 00:17:10.736 "trtype": "TCP", 00:17:10.736 "adrfam": "IPv4", 00:17:10.736 "traddr": "10.0.0.1", 00:17:10.736 "trsvcid": "48908" 00:17:10.736 }, 00:17:10.736 "auth": { 00:17:10.736 "state": "completed", 00:17:10.736 "digest": "sha384", 00:17:10.736 "dhgroup": "ffdhe3072" 00:17:10.736 } 00:17:10.736 } 00:17:10.736 ]' 00:17:10.736 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.997 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.997 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.997 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.997 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.997 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.997 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.997 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.257 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:11.257 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
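From here the log advances to the next DH group (ffdhe4096) and replays the same per-key cycle; the xtrace markers target/auth.sh@119-@123 correspond to a driving loop along these lines (a minimal sketch: the array contents shown are only the groups and key indices visible in this excerpt, keys[]/ckeys[] are the key names registered earlier in auth.sh, and hostrpc is the small wrapper from target/auth.sh@31 reproduced for context):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }       # host-side SPDK instance

dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)     # groups exercised in this part of the log
for dhgroup in "${dhgroups[@]}"; do                    # target/auth.sh@119
  for keyid in "${!keys[@]}"; do                       # target/auth.sh@120, indices 0..3 in this run
    # Re-arm the host with the digest/dhgroup pair before each attempt ...
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    # ... then run the add_host/attach/verify/detach cycle sketched earlier.
    connect_authenticate sha384 "$dhgroup" "$keyid"
  done
done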
00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.827 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.087 00:17:12.087 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.087 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.087 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.347 { 00:17:12.347 "cntlid": 73, 00:17:12.347 "qid": 0, 00:17:12.347 "state": "enabled", 00:17:12.347 "thread": "nvmf_tgt_poll_group_000", 00:17:12.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.347 "listen_address": { 00:17:12.347 "trtype": "TCP", 00:17:12.347 "adrfam": "IPv4", 00:17:12.347 "traddr": "10.0.0.2", 00:17:12.347 "trsvcid": "4420" 00:17:12.347 }, 00:17:12.347 "peer_address": { 00:17:12.347 "trtype": "TCP", 00:17:12.347 "adrfam": "IPv4", 00:17:12.347 "traddr": "10.0.0.1", 00:17:12.347 "trsvcid": "48936" 00:17:12.347 }, 00:17:12.347 "auth": { 00:17:12.347 "state": "completed", 00:17:12.347 "digest": "sha384", 00:17:12.347 "dhgroup": "ffdhe4096" 00:17:12.347 } 00:17:12.347 } 00:17:12.347 ]' 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.347 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.607 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.607 
00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.607 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.607 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:12.607 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:13.177 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.177 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.177 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.177 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.437 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.437 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.437 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.437 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.437 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.438 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.438 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.698 00:17:13.698 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.698 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.698 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.958 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.959 { 00:17:13.959 "cntlid": 75, 00:17:13.959 "qid": 0, 00:17:13.959 "state": "enabled", 00:17:13.959 "thread": "nvmf_tgt_poll_group_000", 00:17:13.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.959 "listen_address": { 00:17:13.959 "trtype": "TCP", 00:17:13.959 "adrfam": "IPv4", 00:17:13.959 "traddr": "10.0.0.2", 00:17:13.959 "trsvcid": "4420" 00:17:13.959 }, 00:17:13.959 "peer_address": { 00:17:13.959 "trtype": "TCP", 00:17:13.959 "adrfam": "IPv4", 00:17:13.959 "traddr": "10.0.0.1", 00:17:13.959 "trsvcid": "48972" 00:17:13.959 }, 00:17:13.959 "auth": { 00:17:13.959 "state": "completed", 00:17:13.959 "digest": "sha384", 00:17:13.959 "dhgroup": "ffdhe4096" 00:17:13.959 } 00:17:13.959 } 00:17:13.959 ]' 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.959 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.224 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:14.224 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:14.798 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.798 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.798 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.798 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.798 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.798 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.798 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:14.799 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.058 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.319 00:17:15.319 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.319 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.319 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.580 { 00:17:15.580 "cntlid": 77, 00:17:15.580 "qid": 0, 00:17:15.580 "state": "enabled", 00:17:15.580 "thread": "nvmf_tgt_poll_group_000", 00:17:15.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.580 "listen_address": { 00:17:15.580 "trtype": "TCP", 00:17:15.580 "adrfam": "IPv4", 00:17:15.580 "traddr": "10.0.0.2", 00:17:15.580 "trsvcid": "4420" 00:17:15.580 }, 00:17:15.580 "peer_address": { 00:17:15.580 "trtype": "TCP", 00:17:15.580 "adrfam": "IPv4", 00:17:15.580 "traddr": "10.0.0.1", 00:17:15.580 "trsvcid": "48996" 00:17:15.580 }, 00:17:15.580 "auth": { 00:17:15.580 "state": "completed", 00:17:15.580 "digest": "sha384", 00:17:15.580 "dhgroup": "ffdhe4096" 00:17:15.580 } 00:17:15.580 } 00:17:15.580 ]' 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.580 00:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.580 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.841 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:15.841 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:16.411 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.411 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.411 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.411 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.411 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.411 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.411 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.411 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.680 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.944 00:17:16.944 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.944 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.944 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.206 { 00:17:17.206 "cntlid": 79, 00:17:17.206 "qid": 0, 00:17:17.206 "state": "enabled", 00:17:17.206 "thread": "nvmf_tgt_poll_group_000", 00:17:17.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.206 "listen_address": { 00:17:17.206 "trtype": "TCP", 00:17:17.206 "adrfam": "IPv4", 00:17:17.206 "traddr": "10.0.0.2", 00:17:17.206 "trsvcid": "4420" 00:17:17.206 }, 00:17:17.206 "peer_address": { 00:17:17.206 "trtype": "TCP", 00:17:17.206 "adrfam": "IPv4", 00:17:17.206 "traddr": "10.0.0.1", 00:17:17.206 "trsvcid": "49026" 00:17:17.206 }, 00:17:17.206 "auth": { 00:17:17.206 "state": "completed", 00:17:17.206 "digest": "sha384", 00:17:17.206 "dhgroup": "ffdhe4096" 00:17:17.206 } 00:17:17.206 } 00:17:17.206 ]' 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.206 00:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.206 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.467 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:17.467 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:18.039 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.039 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.039 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.039 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.039 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.039 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.039 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.039 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.039 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:18.300 00:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.300 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.559 00:17:18.559 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.559 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.559 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.818 { 00:17:18.818 "cntlid": 81, 00:17:18.818 "qid": 0, 00:17:18.818 "state": "enabled", 00:17:18.818 "thread": "nvmf_tgt_poll_group_000", 00:17:18.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.818 "listen_address": { 00:17:18.818 "trtype": "TCP", 00:17:18.818 "adrfam": "IPv4", 00:17:18.818 "traddr": "10.0.0.2", 00:17:18.818 "trsvcid": "4420" 00:17:18.818 }, 00:17:18.818 "peer_address": { 00:17:18.818 "trtype": "TCP", 00:17:18.818 "adrfam": "IPv4", 00:17:18.818 "traddr": "10.0.0.1", 00:17:18.818 "trsvcid": "48098" 00:17:18.818 }, 00:17:18.818 "auth": { 00:17:18.818 "state": "completed", 00:17:18.818 "digest": 
"sha384", 00:17:18.818 "dhgroup": "ffdhe6144" 00:17:18.818 } 00:17:18.818 } 00:17:18.818 ]' 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.818 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.078 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:19.078 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:19.647 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.647 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.647 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.647 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.647 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.647 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.647 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.647 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.907 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.168 00:17:20.168 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.168 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.168 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.428 { 00:17:20.428 "cntlid": 83, 00:17:20.428 "qid": 0, 00:17:20.428 "state": "enabled", 00:17:20.428 "thread": "nvmf_tgt_poll_group_000", 00:17:20.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.428 "listen_address": { 00:17:20.428 "trtype": "TCP", 00:17:20.428 "adrfam": "IPv4", 00:17:20.428 "traddr": "10.0.0.2", 00:17:20.428 
"trsvcid": "4420" 00:17:20.428 }, 00:17:20.428 "peer_address": { 00:17:20.428 "trtype": "TCP", 00:17:20.428 "adrfam": "IPv4", 00:17:20.428 "traddr": "10.0.0.1", 00:17:20.428 "trsvcid": "48118" 00:17:20.428 }, 00:17:20.428 "auth": { 00:17:20.428 "state": "completed", 00:17:20.428 "digest": "sha384", 00:17:20.428 "dhgroup": "ffdhe6144" 00:17:20.428 } 00:17:20.428 } 00:17:20.428 ]' 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.428 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.428 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.428 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.428 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.688 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:20.688 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:21.265 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.265 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.265 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.265 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.265 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.265 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.265 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.265 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.526 
00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.526 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.787 00:17:21.787 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.787 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.787 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.047 { 00:17:22.047 "cntlid": 85, 00:17:22.047 "qid": 0, 00:17:22.047 "state": "enabled", 00:17:22.047 "thread": "nvmf_tgt_poll_group_000", 00:17:22.047 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.047 "listen_address": { 00:17:22.047 "trtype": "TCP", 00:17:22.047 "adrfam": "IPv4", 00:17:22.047 "traddr": "10.0.0.2", 00:17:22.047 "trsvcid": "4420" 00:17:22.047 }, 00:17:22.047 "peer_address": { 00:17:22.047 "trtype": "TCP", 00:17:22.047 "adrfam": "IPv4", 00:17:22.047 "traddr": "10.0.0.1", 00:17:22.047 "trsvcid": "48138" 00:17:22.047 }, 00:17:22.047 "auth": { 00:17:22.047 "state": "completed", 00:17:22.047 "digest": "sha384", 00:17:22.047 "dhgroup": "ffdhe6144" 00:17:22.047 } 00:17:22.047 } 00:17:22.047 ]' 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.047 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.307 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.307 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.307 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.307 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:22.307 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:22.877 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.877 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.877 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.877 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.877 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.877 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.877 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.877 00:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.137 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.399 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.659 { 00:17:23.659 "cntlid": 87, 
00:17:23.659 "qid": 0, 00:17:23.659 "state": "enabled", 00:17:23.659 "thread": "nvmf_tgt_poll_group_000", 00:17:23.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.659 "listen_address": { 00:17:23.659 "trtype": "TCP", 00:17:23.659 "adrfam": "IPv4", 00:17:23.659 "traddr": "10.0.0.2", 00:17:23.659 "trsvcid": "4420" 00:17:23.659 }, 00:17:23.659 "peer_address": { 00:17:23.659 "trtype": "TCP", 00:17:23.659 "adrfam": "IPv4", 00:17:23.659 "traddr": "10.0.0.1", 00:17:23.659 "trsvcid": "48156" 00:17:23.659 }, 00:17:23.659 "auth": { 00:17:23.659 "state": "completed", 00:17:23.659 "digest": "sha384", 00:17:23.659 "dhgroup": "ffdhe6144" 00:17:23.659 } 00:17:23.659 } 00:17:23.659 ]' 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.659 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.918 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.918 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.918 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.918 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.918 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.918 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:23.918 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:24.489 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.750 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.321 00:17:25.321 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.322 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.322 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.582 { 00:17:25.582 "cntlid": 89, 00:17:25.582 "qid": 0, 00:17:25.582 "state": "enabled", 00:17:25.582 "thread": "nvmf_tgt_poll_group_000", 00:17:25.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.582 "listen_address": { 00:17:25.582 "trtype": "TCP", 00:17:25.582 "adrfam": "IPv4", 00:17:25.582 "traddr": "10.0.0.2", 00:17:25.582 "trsvcid": "4420" 00:17:25.582 }, 00:17:25.582 "peer_address": { 00:17:25.582 "trtype": "TCP", 00:17:25.582 "adrfam": "IPv4", 00:17:25.582 "traddr": "10.0.0.1", 00:17:25.582 "trsvcid": "48192" 00:17:25.582 }, 00:17:25.582 "auth": { 00:17:25.582 "state": "completed", 00:17:25.582 "digest": "sha384", 00:17:25.582 "dhgroup": "ffdhe8192" 00:17:25.582 } 00:17:25.582 } 00:17:25.582 ]' 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.582 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.842 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:25.842 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:26.414 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.414 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.414 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.414 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.414 00:24:56 
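The loop that produced the entries above repeats the same sequence for every digest/dhgroup/key combination. Reduced to its bare commands, one iteration looks like the sketch below; it is assembled from this log only, with the rpc.py path shortened, the target assumed on rpc.py's default socket, and $hostnqn, $hostid and the two DHHC-1 secret variables standing in for the literal values printed above (key0/ckey0 are key names configured earlier in the run, outside this excerpt):

  # Host side: restrict the initiator to the digest and DH group under test.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

  # Target side: allow the host NQN on the subsystem with the keys under test.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller (this forces the DH-HMAC-CHAP handshake),
  # verify the qpair as shown earlier, then detach again.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Kernel initiator: repeat the handshake with nvme-cli, then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"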
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.414 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.414 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.414 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.676 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.248 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.248 { 00:17:27.248 "cntlid": 91, 00:17:27.248 "qid": 0, 00:17:27.248 "state": "enabled", 00:17:27.248 "thread": "nvmf_tgt_poll_group_000", 00:17:27.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.248 "listen_address": { 00:17:27.248 "trtype": "TCP", 00:17:27.248 "adrfam": "IPv4", 00:17:27.248 "traddr": "10.0.0.2", 00:17:27.248 "trsvcid": "4420" 00:17:27.248 }, 00:17:27.248 "peer_address": { 00:17:27.248 "trtype": "TCP", 00:17:27.248 "adrfam": "IPv4", 00:17:27.248 "traddr": "10.0.0.1", 00:17:27.248 "trsvcid": "48230" 00:17:27.248 }, 00:17:27.248 "auth": { 00:17:27.248 "state": "completed", 00:17:27.248 "digest": "sha384", 00:17:27.248 "dhgroup": "ffdhe8192" 00:17:27.248 } 00:17:27.248 } 00:17:27.248 ]' 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.248 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.509 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.509 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.509 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.509 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:27.509 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:28.082 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.344 00:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.344 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.916 00:17:28.916 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.916 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.916 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.177 00:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.177 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.177 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.177 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.177 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.177 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.177 { 00:17:29.177 "cntlid": 93, 00:17:29.177 "qid": 0, 00:17:29.177 "state": "enabled", 00:17:29.177 "thread": "nvmf_tgt_poll_group_000", 00:17:29.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.177 "listen_address": { 00:17:29.177 "trtype": "TCP", 00:17:29.177 "adrfam": "IPv4", 00:17:29.177 "traddr": "10.0.0.2", 00:17:29.177 "trsvcid": "4420" 00:17:29.177 }, 00:17:29.177 "peer_address": { 00:17:29.177 "trtype": "TCP", 00:17:29.177 "adrfam": "IPv4", 00:17:29.177 "traddr": "10.0.0.1", 00:17:29.177 "trsvcid": "38548" 00:17:29.177 }, 00:17:29.177 "auth": { 00:17:29.178 "state": "completed", 00:17:29.178 "digest": "sha384", 00:17:29.178 "dhgroup": "ffdhe8192" 00:17:29.178 } 00:17:29.178 } 00:17:29.178 ]' 00:17:29.178 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.178 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.178 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.178 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.178 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.178 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.178 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.178 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.445 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:29.445 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:30.015 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.015 00:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.015 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.016 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.016 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.016 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.016 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.016 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.277 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.538 00:17:30.799 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.799 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.799 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.799 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.799 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.799 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.799 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.800 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.800 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.800 { 00:17:30.800 "cntlid": 95, 00:17:30.800 "qid": 0, 00:17:30.800 "state": "enabled", 00:17:30.800 "thread": "nvmf_tgt_poll_group_000", 00:17:30.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.800 "listen_address": { 00:17:30.800 "trtype": "TCP", 00:17:30.800 "adrfam": "IPv4", 00:17:30.800 "traddr": "10.0.0.2", 00:17:30.800 "trsvcid": "4420" 00:17:30.800 }, 00:17:30.800 "peer_address": { 00:17:30.800 "trtype": "TCP", 00:17:30.800 "adrfam": "IPv4", 00:17:30.800 "traddr": "10.0.0.1", 00:17:30.800 "trsvcid": "38560" 00:17:30.800 }, 00:17:30.800 "auth": { 00:17:30.800 "state": "completed", 00:17:30.800 "digest": "sha384", 00:17:30.800 "dhgroup": "ffdhe8192" 00:17:30.800 } 00:17:30.800 } 00:17:30.800 ]' 00:17:30.800 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.800 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.800 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.063 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.063 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.063 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.063 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.063 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.063 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:31.063 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.008 00:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.008 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.269 00:17:32.269 
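At this point the run switches from the ffdhe groups to sha512 with the "null" DH group. The null group appears to select plain DH-HMAC-CHAP challenge/response with no Diffie-Hellman exchange mixed in, so the session relies on the shared DHHC-1 secrets alone; treat that reading as an assumption, since the log itself only records the option values. The host-side restriction that drives this sub-loop is just the following (path shortened, default target socket assumed):

  # Allow only sha512 and the null DH group on the SPDK host for this sub-loop.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null
  # The later qpair check then expects "digest": "sha512" and "dhgroup": "null".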
00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.269 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.269 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.530 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.530 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.530 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.530 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.530 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.530 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.530 { 00:17:32.530 "cntlid": 97, 00:17:32.530 "qid": 0, 00:17:32.531 "state": "enabled", 00:17:32.531 "thread": "nvmf_tgt_poll_group_000", 00:17:32.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.531 "listen_address": { 00:17:32.531 "trtype": "TCP", 00:17:32.531 "adrfam": "IPv4", 00:17:32.531 "traddr": "10.0.0.2", 00:17:32.531 "trsvcid": "4420" 00:17:32.531 }, 00:17:32.531 "peer_address": { 00:17:32.531 "trtype": "TCP", 00:17:32.531 "adrfam": "IPv4", 00:17:32.531 "traddr": "10.0.0.1", 00:17:32.531 "trsvcid": "38584" 00:17:32.531 }, 00:17:32.531 "auth": { 00:17:32.531 "state": "completed", 00:17:32.531 "digest": "sha512", 00:17:32.531 "dhgroup": "null" 00:17:32.531 } 00:17:32.531 } 00:17:32.531 ]' 00:17:32.531 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.531 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.531 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.531 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:32.531 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.531 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.531 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.531 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.809 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:32.809 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:33.383 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.383 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.383 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.383 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.383 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.383 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.383 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.383 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.644 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.644 00:17:33.906 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.906 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.906 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.906 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.906 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.906 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.906 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.906 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.906 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.906 { 00:17:33.906 "cntlid": 99, 00:17:33.906 "qid": 0, 00:17:33.906 "state": "enabled", 00:17:33.906 "thread": "nvmf_tgt_poll_group_000", 00:17:33.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.906 "listen_address": { 00:17:33.906 "trtype": "TCP", 00:17:33.906 "adrfam": "IPv4", 00:17:33.906 "traddr": "10.0.0.2", 00:17:33.906 "trsvcid": "4420" 00:17:33.906 }, 00:17:33.906 "peer_address": { 00:17:33.906 "trtype": "TCP", 00:17:33.906 "adrfam": "IPv4", 00:17:33.907 "traddr": "10.0.0.1", 00:17:33.907 "trsvcid": "38616" 00:17:33.907 }, 00:17:33.907 "auth": { 00:17:33.907 "state": "completed", 00:17:33.907 "digest": "sha512", 00:17:33.907 "dhgroup": "null" 00:17:33.907 } 00:17:33.907 } 00:17:33.907 ]' 00:17:33.907 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.168 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.168 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.168 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:34.168 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.168 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.168 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.168 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.429 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:34.429 00:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
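Right after each attach request like the one just logged, auth.sh makes sure the authenticated attach actually produced a controller before trusting the qpair inspection. A condensed form of that check (the @73 step in the entries that follow), with the rpc.py path shortened as in the earlier sketches:

  # Host side: confirm the attach created controller nvme0.
  name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]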
00:17:35.001 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.262 00:17:35.262 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.262 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.262 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.528 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.528 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.528 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.528 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.528 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.528 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.528 { 00:17:35.528 "cntlid": 101, 00:17:35.528 "qid": 0, 00:17:35.529 "state": "enabled", 00:17:35.529 "thread": "nvmf_tgt_poll_group_000", 00:17:35.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.529 "listen_address": { 00:17:35.529 "trtype": "TCP", 00:17:35.529 "adrfam": "IPv4", 00:17:35.529 "traddr": "10.0.0.2", 00:17:35.529 "trsvcid": "4420" 00:17:35.529 }, 00:17:35.529 "peer_address": { 00:17:35.529 "trtype": "TCP", 00:17:35.529 "adrfam": "IPv4", 00:17:35.529 "traddr": "10.0.0.1", 00:17:35.529 "trsvcid": "38644" 00:17:35.529 }, 00:17:35.529 "auth": { 00:17:35.529 "state": "completed", 00:17:35.529 "digest": "sha512", 00:17:35.529 "dhgroup": "null" 00:17:35.529 } 00:17:35.529 } 00:17:35.529 ]' 00:17:35.529 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.529 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.529 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.529 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:35.529 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.801 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.801 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.801 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.801 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:35.801 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:36.373 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.373 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.373 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.373 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.373 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.373 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.373 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:36.374 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.634 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.895 00:17:36.895 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.895 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.895 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.161 { 00:17:37.161 "cntlid": 103, 00:17:37.161 "qid": 0, 00:17:37.161 "state": "enabled", 00:17:37.161 "thread": "nvmf_tgt_poll_group_000", 00:17:37.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.161 "listen_address": { 00:17:37.161 "trtype": "TCP", 00:17:37.161 "adrfam": "IPv4", 00:17:37.161 "traddr": "10.0.0.2", 00:17:37.161 "trsvcid": "4420" 00:17:37.161 }, 00:17:37.161 "peer_address": { 00:17:37.161 "trtype": "TCP", 00:17:37.161 "adrfam": "IPv4", 00:17:37.161 "traddr": "10.0.0.1", 00:17:37.161 "trsvcid": "38676" 00:17:37.161 }, 00:17:37.161 "auth": { 00:17:37.161 "state": "completed", 00:17:37.161 "digest": "sha512", 00:17:37.161 "dhgroup": "null" 00:17:37.161 } 00:17:37.161 } 00:17:37.161 ]' 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.161 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.424 00:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:37.424 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:37.996 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.996 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.996 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.996 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.996 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.996 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.996 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.996 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.996 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
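Taken together, the trace entries above repeat one connect_authenticate pass per digest/DH-group/key combination. A condensed sketch of that pass follows; the NQNs, socket path and rpc.py path are copied from the log, the DH-HMAC-CHAP secrets are elided, and key0/ckey0 stand for key entries registered earlier in the script, outside this excerpt:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock      # separate host-side SPDK instance used by hostrpc
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # host side: restrict the initiator to a single digest/dhgroup combination
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # target side (default RPC socket): allow the host on the subsystem with a specific key pair
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller, authenticating with the same key pair
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen in the traces passes the controller (bidirectional) secret only when one exists for that key index, which is why the key3 iterations in this log add the host with --dhchap-key key3 alone.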
00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.256 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.518 00:17:38.518 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.518 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.518 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.518 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.518 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.518 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.518 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.518 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.518 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.518 { 00:17:38.518 "cntlid": 105, 00:17:38.518 "qid": 0, 00:17:38.518 "state": "enabled", 00:17:38.518 "thread": "nvmf_tgt_poll_group_000", 00:17:38.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.518 "listen_address": { 00:17:38.518 "trtype": "TCP", 00:17:38.518 "adrfam": "IPv4", 00:17:38.518 "traddr": "10.0.0.2", 00:17:38.518 "trsvcid": "4420" 00:17:38.518 }, 00:17:38.518 "peer_address": { 00:17:38.518 "trtype": "TCP", 00:17:38.518 "adrfam": "IPv4", 00:17:38.518 "traddr": "10.0.0.1", 00:17:38.518 "trsvcid": "57054" 00:17:38.518 }, 00:17:38.518 "auth": { 00:17:38.518 "state": "completed", 00:17:38.518 "digest": "sha512", 00:17:38.518 "dhgroup": "ffdhe2048" 00:17:38.518 } 00:17:38.518 } 00:17:38.518 ]' 00:17:38.518 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.779 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.779 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.779 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.779 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.779 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.779 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.779 00:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.040 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:39.040 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:39.611 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.612 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.873 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.873 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.873 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.873 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.873 00:17:39.873 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.873 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.873 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.134 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.134 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.134 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.134 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.134 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.134 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.134 { 00:17:40.135 "cntlid": 107, 00:17:40.135 "qid": 0, 00:17:40.135 "state": "enabled", 00:17:40.135 "thread": "nvmf_tgt_poll_group_000", 00:17:40.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.135 "listen_address": { 00:17:40.135 "trtype": "TCP", 00:17:40.135 "adrfam": "IPv4", 00:17:40.135 "traddr": "10.0.0.2", 00:17:40.135 "trsvcid": "4420" 00:17:40.135 }, 00:17:40.135 "peer_address": { 00:17:40.135 "trtype": "TCP", 00:17:40.135 "adrfam": "IPv4", 00:17:40.135 "traddr": "10.0.0.1", 00:17:40.135 "trsvcid": "57084" 00:17:40.135 }, 00:17:40.135 "auth": { 00:17:40.135 "state": "completed", 00:17:40.135 "digest": "sha512", 00:17:40.135 "dhgroup": "ffdhe2048" 00:17:40.135 } 00:17:40.135 } 00:17:40.135 ]' 00:17:40.135 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.135 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.135 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.395 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.395 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:40.395 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.395 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.395 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.395 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:40.395 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
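Each attach is followed by the same verification, visible in the qpair dumps throughout this excerpt: the host must expose the controller as nvme0, and the target's qpair must report that DH-HMAC-CHAP completed with the negotiated digest and DH group. Roughly, reusing the variables from the sketch above (the script fetches the qpair JSON once and runs jq over it; separate calls are shown here only for readability):

  # host side: the authenticated controller should exist
  $RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0
  # target side: inspect the qpair's auth block
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'   # expect: sha512
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'  # expect: null, ffdhe2048, ffdhe3072, ...
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expect: completed
  # tear down before the next combination
  $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0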
00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.339 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.600 00:17:41.600 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.600 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.600 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.861 { 00:17:41.861 "cntlid": 109, 00:17:41.861 "qid": 0, 00:17:41.861 "state": "enabled", 00:17:41.861 "thread": "nvmf_tgt_poll_group_000", 00:17:41.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.861 "listen_address": { 00:17:41.861 "trtype": "TCP", 00:17:41.861 "adrfam": "IPv4", 00:17:41.861 "traddr": "10.0.0.2", 00:17:41.861 "trsvcid": "4420" 00:17:41.861 }, 00:17:41.861 "peer_address": { 00:17:41.861 "trtype": "TCP", 00:17:41.861 "adrfam": "IPv4", 00:17:41.861 "traddr": "10.0.0.1", 00:17:41.861 "trsvcid": "57118" 00:17:41.861 }, 00:17:41.861 "auth": { 00:17:41.861 "state": "completed", 00:17:41.861 "digest": "sha512", 00:17:41.861 "dhgroup": "ffdhe2048" 00:17:41.861 } 00:17:41.861 } 00:17:41.861 ]' 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.861 00:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.861 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.122 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:42.122 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:42.692 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.692 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.692 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.692 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.692 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.692 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.692 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:42.692 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.953 00:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.953 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.953 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.214 { 00:17:43.214 "cntlid": 111, 00:17:43.214 "qid": 0, 00:17:43.214 "state": "enabled", 00:17:43.214 "thread": "nvmf_tgt_poll_group_000", 00:17:43.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.214 "listen_address": { 00:17:43.214 "trtype": "TCP", 00:17:43.214 "adrfam": "IPv4", 00:17:43.214 "traddr": "10.0.0.2", 00:17:43.214 "trsvcid": "4420" 00:17:43.214 }, 00:17:43.214 "peer_address": { 00:17:43.214 "trtype": "TCP", 00:17:43.214 "adrfam": "IPv4", 00:17:43.214 "traddr": "10.0.0.1", 00:17:43.214 "trsvcid": "57146" 00:17:43.214 }, 00:17:43.214 "auth": { 00:17:43.214 "state": "completed", 00:17:43.214 "digest": "sha512", 00:17:43.214 "dhgroup": "ffdhe2048" 00:17:43.214 } 00:17:43.214 } 00:17:43.214 ]' 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.214 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.214 
00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.475 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.475 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.475 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.475 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.475 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.735 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:43.735 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.307 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.568 00:17:44.568 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.568 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.568 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.829 { 00:17:44.829 "cntlid": 113, 00:17:44.829 "qid": 0, 00:17:44.829 "state": "enabled", 00:17:44.829 "thread": "nvmf_tgt_poll_group_000", 00:17:44.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.829 "listen_address": { 00:17:44.829 "trtype": "TCP", 00:17:44.829 "adrfam": "IPv4", 00:17:44.829 "traddr": "10.0.0.2", 00:17:44.829 "trsvcid": "4420" 00:17:44.829 }, 00:17:44.829 "peer_address": { 00:17:44.829 "trtype": "TCP", 00:17:44.829 "adrfam": "IPv4", 00:17:44.829 "traddr": "10.0.0.1", 00:17:44.829 "trsvcid": "57180" 00:17:44.829 }, 00:17:44.829 "auth": { 00:17:44.829 "state": "completed", 00:17:44.829 "digest": "sha512", 00:17:44.829 "dhgroup": "ffdhe3072" 00:17:44.829 } 00:17:44.829 } 00:17:44.829 ]' 00:17:44.829 00:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.829 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.090 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.090 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.090 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.090 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:45.091 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:45.661 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.661 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.661 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.661 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.661 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.661 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.661 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.661 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.921 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:45.921 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.921 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:45.921 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.921 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.921 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.922 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.922 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.922 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.922 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.922 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.922 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.922 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.183 00:17:46.183 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.183 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.183 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.443 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.444 { 00:17:46.444 "cntlid": 115, 00:17:46.444 "qid": 0, 00:17:46.444 "state": "enabled", 00:17:46.444 "thread": "nvmf_tgt_poll_group_000", 00:17:46.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.444 "listen_address": { 00:17:46.444 "trtype": "TCP", 00:17:46.444 "adrfam": "IPv4", 00:17:46.444 "traddr": "10.0.0.2", 00:17:46.444 "trsvcid": "4420" 00:17:46.444 }, 00:17:46.444 "peer_address": { 00:17:46.444 "trtype": "TCP", 00:17:46.444 "adrfam": "IPv4", 
00:17:46.444 "traddr": "10.0.0.1", 00:17:46.444 "trsvcid": "57216" 00:17:46.444 }, 00:17:46.444 "auth": { 00:17:46.444 "state": "completed", 00:17:46.444 "digest": "sha512", 00:17:46.444 "dhgroup": "ffdhe3072" 00:17:46.444 } 00:17:46.444 } 00:17:46.444 ]' 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.444 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.444 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.444 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.444 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.705 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:46.705 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:47.277 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.277 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.277 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.277 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.277 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.277 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.277 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.277 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.539 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:47.539 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.539 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.539 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.539 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.539 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.539 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.539 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.539 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.539 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.539 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.539 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.539 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.816 00:17:47.816 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.816 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.816 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.129 { 00:17:48.129 "cntlid": 117, 00:17:48.129 "qid": 0, 00:17:48.129 "state": "enabled", 00:17:48.129 "thread": "nvmf_tgt_poll_group_000", 00:17:48.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.129 "listen_address": { 00:17:48.129 "trtype": "TCP", 
00:17:48.129 "adrfam": "IPv4", 00:17:48.129 "traddr": "10.0.0.2", 00:17:48.129 "trsvcid": "4420" 00:17:48.129 }, 00:17:48.129 "peer_address": { 00:17:48.129 "trtype": "TCP", 00:17:48.129 "adrfam": "IPv4", 00:17:48.129 "traddr": "10.0.0.1", 00:17:48.129 "trsvcid": "52862" 00:17:48.129 }, 00:17:48.129 "auth": { 00:17:48.129 "state": "completed", 00:17:48.129 "digest": "sha512", 00:17:48.129 "dhgroup": "ffdhe3072" 00:17:48.129 } 00:17:48.129 } 00:17:48.129 ]' 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.129 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.446 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:48.446 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.016 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.277 00:17:49.277 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.277 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.277 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.538 { 00:17:49.538 "cntlid": 119, 00:17:49.538 "qid": 0, 00:17:49.538 "state": "enabled", 00:17:49.538 "thread": "nvmf_tgt_poll_group_000", 00:17:49.538 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.538 "listen_address": { 00:17:49.538 "trtype": "TCP", 00:17:49.538 "adrfam": "IPv4", 00:17:49.538 "traddr": "10.0.0.2", 00:17:49.538 "trsvcid": "4420" 00:17:49.538 }, 00:17:49.538 "peer_address": { 00:17:49.538 "trtype": "TCP", 00:17:49.538 "adrfam": "IPv4", 00:17:49.538 "traddr": "10.0.0.1", 00:17:49.538 "trsvcid": "52888" 00:17:49.538 }, 00:17:49.538 "auth": { 00:17:49.538 "state": "completed", 00:17:49.538 "digest": "sha512", 00:17:49.538 "dhgroup": "ffdhe3072" 00:17:49.538 } 00:17:49.538 } 00:17:49.538 ]' 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.538 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.800 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.800 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.800 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.800 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:49.800 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:50.373 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.373 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.374 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.374 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.374 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.374 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.374 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.374 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.374 00:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.634 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.895 00:17:50.895 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.895 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.895 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.156 00:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.156 { 00:17:51.156 "cntlid": 121, 00:17:51.156 "qid": 0, 00:17:51.156 "state": "enabled", 00:17:51.156 "thread": "nvmf_tgt_poll_group_000", 00:17:51.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.156 "listen_address": { 00:17:51.156 "trtype": "TCP", 00:17:51.156 "adrfam": "IPv4", 00:17:51.156 "traddr": "10.0.0.2", 00:17:51.156 "trsvcid": "4420" 00:17:51.156 }, 00:17:51.156 "peer_address": { 00:17:51.156 "trtype": "TCP", 00:17:51.156 "adrfam": "IPv4", 00:17:51.156 "traddr": "10.0.0.1", 00:17:51.156 "trsvcid": "52926" 00:17:51.156 }, 00:17:51.156 "auth": { 00:17:51.156 "state": "completed", 00:17:51.156 "digest": "sha512", 00:17:51.156 "dhgroup": "ffdhe4096" 00:17:51.156 } 00:17:51.156 } 00:17:51.156 ]' 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.156 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.417 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:51.417 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:51.989 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.989 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.989 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.989 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.989 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
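The iteration that ends here, and each of the passes that follow for the remaining key IDs and the larger FFDHE groups, runs the same DH-HMAC-CHAP round trip. The sketch below is assembled only from the commands visible in this trace; rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, the target is assumed to listen on 10.0.0.2:4420 with the host-side bdev RPC server on /var/tmp/host.sock, keys key0/ckey0 are assumed to be loaded on both sides already (the key-loading step is outside this excerpt), the target-side calls are assumed to go to the target's default RPC socket (the trace wraps them in rpc_cmd with xtrace disabled), and the DHHC-1 secrets are elided.

# Host side: restrict DH-CHAP to one digest/dhgroup combination for this pass
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: allow the host NQN and bind its DH-CHAP key (and controller key)
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, which performs the authentication handshake
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller came up and the qpair reports the negotiated auth
# (the trace stores the qpairs JSON in a variable and runs three jq checks;
#  piping directly is shown here for brevity)
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
# expected: sha512 / ffdhe4096 / completed

# Tear down the SPDK-host connection, repeat the handshake with the kernel
# initiator using the expanded DHHC-1 secrets, then clean up the host entry
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

Only the --dhchap-digests/--dhchap-dhgroups values and the key index change between the passes logged below; key3 iterations omit --dhchap-ctrlr-key because no controller key is defined for it in this test.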
00:17:51.989 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.989 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.989 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:52.249 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.250 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.510 00:17:52.510 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.510 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.510 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.510 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.510 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.510 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.510 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.510 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.510 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.510 { 00:17:52.510 "cntlid": 123, 00:17:52.510 "qid": 0, 00:17:52.510 "state": "enabled", 00:17:52.510 "thread": "nvmf_tgt_poll_group_000", 00:17:52.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.510 "listen_address": { 00:17:52.510 "trtype": "TCP", 00:17:52.510 "adrfam": "IPv4", 00:17:52.510 "traddr": "10.0.0.2", 00:17:52.510 "trsvcid": "4420" 00:17:52.510 }, 00:17:52.510 "peer_address": { 00:17:52.510 "trtype": "TCP", 00:17:52.510 "adrfam": "IPv4", 00:17:52.510 "traddr": "10.0.0.1", 00:17:52.510 "trsvcid": "52956" 00:17:52.510 }, 00:17:52.510 "auth": { 00:17:52.510 "state": "completed", 00:17:52.510 "digest": "sha512", 00:17:52.510 "dhgroup": "ffdhe4096" 00:17:52.510 } 00:17:52.510 } 00:17:52.510 ]' 00:17:52.510 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.770 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.770 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.770 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.770 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.770 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.770 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.770 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.030 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:53.030 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.601 00:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.601 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.861 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.861 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.861 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.861 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.861 00:17:54.123 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.123 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.123 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.123 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.123 00:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.123 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.123 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.123 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.123 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.123 { 00:17:54.123 "cntlid": 125, 00:17:54.123 "qid": 0, 00:17:54.123 "state": "enabled", 00:17:54.123 "thread": "nvmf_tgt_poll_group_000", 00:17:54.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.123 "listen_address": { 00:17:54.123 "trtype": "TCP", 00:17:54.123 "adrfam": "IPv4", 00:17:54.123 "traddr": "10.0.0.2", 00:17:54.123 "trsvcid": "4420" 00:17:54.123 }, 00:17:54.123 "peer_address": { 00:17:54.123 "trtype": "TCP", 00:17:54.123 "adrfam": "IPv4", 00:17:54.123 "traddr": "10.0.0.1", 00:17:54.123 "trsvcid": "52990" 00:17:54.123 }, 00:17:54.123 "auth": { 00:17:54.123 "state": "completed", 00:17:54.123 "digest": "sha512", 00:17:54.123 "dhgroup": "ffdhe4096" 00:17:54.123 } 00:17:54.123 } 00:17:54.123 ]' 00:17:54.123 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.384 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.384 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.384 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.384 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.384 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.384 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.384 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.644 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:54.645 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:17:55.215 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.216 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.475 00:17:55.476 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.476 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.476 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.735 00:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.735 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.735 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.735 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.735 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.735 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.735 { 00:17:55.735 "cntlid": 127, 00:17:55.735 "qid": 0, 00:17:55.735 "state": "enabled", 00:17:55.735 "thread": "nvmf_tgt_poll_group_000", 00:17:55.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.735 "listen_address": { 00:17:55.735 "trtype": "TCP", 00:17:55.735 "adrfam": "IPv4", 00:17:55.735 "traddr": "10.0.0.2", 00:17:55.735 "trsvcid": "4420" 00:17:55.735 }, 00:17:55.735 "peer_address": { 00:17:55.735 "trtype": "TCP", 00:17:55.735 "adrfam": "IPv4", 00:17:55.735 "traddr": "10.0.0.1", 00:17:55.735 "trsvcid": "53014" 00:17:55.735 }, 00:17:55.735 "auth": { 00:17:55.735 "state": "completed", 00:17:55.735 "digest": "sha512", 00:17:55.735 "dhgroup": "ffdhe4096" 00:17:55.735 } 00:17:55.735 } 00:17:55.735 ]' 00:17:55.735 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.735 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.735 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.996 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.996 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.996 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.996 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.996 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.257 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:56.257 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.828 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.829 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.829 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.829 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.829 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.829 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.829 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.400 00:17:57.400 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.400 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.400 
00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.400 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.400 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.400 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.400 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.400 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.400 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.400 { 00:17:57.400 "cntlid": 129, 00:17:57.400 "qid": 0, 00:17:57.400 "state": "enabled", 00:17:57.400 "thread": "nvmf_tgt_poll_group_000", 00:17:57.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.400 "listen_address": { 00:17:57.400 "trtype": "TCP", 00:17:57.400 "adrfam": "IPv4", 00:17:57.400 "traddr": "10.0.0.2", 00:17:57.400 "trsvcid": "4420" 00:17:57.400 }, 00:17:57.400 "peer_address": { 00:17:57.400 "trtype": "TCP", 00:17:57.400 "adrfam": "IPv4", 00:17:57.400 "traddr": "10.0.0.1", 00:17:57.400 "trsvcid": "53038" 00:17:57.400 }, 00:17:57.400 "auth": { 00:17:57.400 "state": "completed", 00:17:57.400 "digest": "sha512", 00:17:57.400 "dhgroup": "ffdhe6144" 00:17:57.400 } 00:17:57.400 } 00:17:57.400 ]' 00:17:57.400 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.400 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.400 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.660 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.660 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.660 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.660 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.660 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.661 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:57.661 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:17:58.602 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.602 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.602 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.602 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.602 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.602 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.602 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.602 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.602 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.863 00:17:58.863 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.863 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.863 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.124 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.124 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.125 { 00:17:59.125 "cntlid": 131, 00:17:59.125 "qid": 0, 00:17:59.125 "state": "enabled", 00:17:59.125 "thread": "nvmf_tgt_poll_group_000", 00:17:59.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.125 "listen_address": { 00:17:59.125 "trtype": "TCP", 00:17:59.125 "adrfam": "IPv4", 00:17:59.125 "traddr": "10.0.0.2", 00:17:59.125 "trsvcid": "4420" 00:17:59.125 }, 00:17:59.125 "peer_address": { 00:17:59.125 "trtype": "TCP", 00:17:59.125 "adrfam": "IPv4", 00:17:59.125 "traddr": "10.0.0.1", 00:17:59.125 "trsvcid": "45686" 00:17:59.125 }, 00:17:59.125 "auth": { 00:17:59.125 "state": "completed", 00:17:59.125 "digest": "sha512", 00:17:59.125 "dhgroup": "ffdhe6144" 00:17:59.125 } 00:17:59.125 } 00:17:59.125 ]' 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.125 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.384 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:59.384 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:17:59.954 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.954 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.954 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.954 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.954 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.954 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.954 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.954 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.213 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.473 00:18:00.473 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.473 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.473 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.734 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.734 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.734 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.734 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.734 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.734 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.734 { 00:18:00.734 "cntlid": 133, 00:18:00.734 "qid": 0, 00:18:00.734 "state": "enabled", 00:18:00.734 "thread": "nvmf_tgt_poll_group_000", 00:18:00.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.734 "listen_address": { 00:18:00.734 "trtype": "TCP", 00:18:00.734 "adrfam": "IPv4", 00:18:00.734 "traddr": "10.0.0.2", 00:18:00.734 "trsvcid": "4420" 00:18:00.734 }, 00:18:00.734 "peer_address": { 00:18:00.734 "trtype": "TCP", 00:18:00.734 "adrfam": "IPv4", 00:18:00.734 "traddr": "10.0.0.1", 00:18:00.734 "trsvcid": "45716" 00:18:00.734 }, 00:18:00.734 "auth": { 00:18:00.734 "state": "completed", 00:18:00.735 "digest": "sha512", 00:18:00.735 "dhgroup": "ffdhe6144" 00:18:00.735 } 00:18:00.735 } 00:18:00.735 ]' 00:18:00.735 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.735 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.735 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.735 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.735 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.995 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.995 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.995 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.995 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret 
DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:18:00.995 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:18:01.574 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.574 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.574 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.574 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.575 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.575 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.575 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:01.575 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.842 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:01.843 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.103 00:18:02.103 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.103 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.103 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.364 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.364 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.364 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.364 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.364 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.364 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.364 { 00:18:02.364 "cntlid": 135, 00:18:02.364 "qid": 0, 00:18:02.364 "state": "enabled", 00:18:02.364 "thread": "nvmf_tgt_poll_group_000", 00:18:02.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.364 "listen_address": { 00:18:02.364 "trtype": "TCP", 00:18:02.364 "adrfam": "IPv4", 00:18:02.364 "traddr": "10.0.0.2", 00:18:02.364 "trsvcid": "4420" 00:18:02.364 }, 00:18:02.364 "peer_address": { 00:18:02.364 "trtype": "TCP", 00:18:02.364 "adrfam": "IPv4", 00:18:02.364 "traddr": "10.0.0.1", 00:18:02.364 "trsvcid": "45736" 00:18:02.364 }, 00:18:02.364 "auth": { 00:18:02.364 "state": "completed", 00:18:02.364 "digest": "sha512", 00:18:02.364 "dhgroup": "ffdhe6144" 00:18:02.364 } 00:18:02.364 } 00:18:02.364 ]' 00:18:02.364 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.364 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.364 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.624 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.624 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.624 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.625 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.625 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.625 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:18:02.625 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:18:03.194 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.455 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.455 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.455 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.455 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.455 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.455 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.455 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.455 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.455 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.025 00:18:04.025 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.025 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.025 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.285 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.285 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.285 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.285 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.285 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.286 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.286 { 00:18:04.286 "cntlid": 137, 00:18:04.286 "qid": 0, 00:18:04.286 "state": "enabled", 00:18:04.286 "thread": "nvmf_tgt_poll_group_000", 00:18:04.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.286 "listen_address": { 00:18:04.286 "trtype": "TCP", 00:18:04.286 "adrfam": "IPv4", 00:18:04.286 "traddr": "10.0.0.2", 00:18:04.286 "trsvcid": "4420" 00:18:04.286 }, 00:18:04.286 "peer_address": { 00:18:04.286 "trtype": "TCP", 00:18:04.286 "adrfam": "IPv4", 00:18:04.286 "traddr": "10.0.0.1", 00:18:04.286 "trsvcid": "45754" 00:18:04.286 }, 00:18:04.286 "auth": { 00:18:04.286 "state": "completed", 00:18:04.286 "digest": "sha512", 00:18:04.286 "dhgroup": "ffdhe8192" 00:18:04.286 } 00:18:04.286 } 00:18:04.286 ]' 00:18:04.286 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.286 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.286 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.286 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.286 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.286 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.286 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.286 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.546 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:18:04.546 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:18:05.115 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.115 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.115 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.115 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.115 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.115 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.115 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.115 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.377 00:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.377 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.641 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.902 { 00:18:05.902 "cntlid": 139, 00:18:05.902 "qid": 0, 00:18:05.902 "state": "enabled", 00:18:05.902 "thread": "nvmf_tgt_poll_group_000", 00:18:05.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:05.902 "listen_address": { 00:18:05.902 "trtype": "TCP", 00:18:05.902 "adrfam": "IPv4", 00:18:05.902 "traddr": "10.0.0.2", 00:18:05.902 "trsvcid": "4420" 00:18:05.902 }, 00:18:05.902 "peer_address": { 00:18:05.902 "trtype": "TCP", 00:18:05.902 "adrfam": "IPv4", 00:18:05.902 "traddr": "10.0.0.1", 00:18:05.902 "trsvcid": "45790" 00:18:05.902 }, 00:18:05.902 "auth": { 00:18:05.902 "state": "completed", 00:18:05.902 "digest": "sha512", 00:18:05.902 "dhgroup": "ffdhe8192" 00:18:05.902 } 00:18:05.902 } 00:18:05.902 ]' 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.902 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.163 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.163 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.163 00:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.163 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.163 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.423 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:18:06.423 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: --dhchap-ctrl-secret DHHC-1:02:OWJjZTc3YjcwNzJlN2JlNjMzYWM2ZTQ3ZDEyNzRjMzk5ZDMyNGI4MmJlMTUwOTg0H8R8cQ==: 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.994 00:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.994 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.565 00:18:07.565 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.565 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.565 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.825 { 00:18:07.825 "cntlid": 141, 00:18:07.825 "qid": 0, 00:18:07.825 "state": "enabled", 00:18:07.825 "thread": "nvmf_tgt_poll_group_000", 00:18:07.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.825 "listen_address": { 00:18:07.825 "trtype": "TCP", 00:18:07.825 "adrfam": "IPv4", 00:18:07.825 "traddr": "10.0.0.2", 00:18:07.825 "trsvcid": "4420" 00:18:07.825 }, 00:18:07.825 "peer_address": { 00:18:07.825 "trtype": "TCP", 00:18:07.825 "adrfam": "IPv4", 00:18:07.825 "traddr": "10.0.0.1", 00:18:07.825 "trsvcid": "45828" 00:18:07.825 }, 00:18:07.825 "auth": { 00:18:07.825 "state": "completed", 00:18:07.825 "digest": "sha512", 00:18:07.825 "dhgroup": "ffdhe8192" 00:18:07.825 } 00:18:07.825 } 00:18:07.825 ]' 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.825 00:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.825 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.086 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:18:08.086 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:01:ODE1MDlkOTMyNDExMjk5Y2MwZmJiNmYwYzRlNTM5MzZLnj+N: 00:18:08.657 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.657 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.657 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.658 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.658 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.658 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.658 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.658 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.918 00:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.918 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.490 00:18:09.490 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.490 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.490 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.490 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.490 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.490 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.490 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.490 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.490 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.490 { 00:18:09.490 "cntlid": 143, 00:18:09.490 "qid": 0, 00:18:09.490 "state": "enabled", 00:18:09.490 "thread": "nvmf_tgt_poll_group_000", 00:18:09.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.490 "listen_address": { 00:18:09.490 "trtype": "TCP", 00:18:09.490 "adrfam": "IPv4", 00:18:09.490 "traddr": "10.0.0.2", 00:18:09.490 "trsvcid": "4420" 00:18:09.490 }, 00:18:09.490 "peer_address": { 00:18:09.490 "trtype": "TCP", 00:18:09.490 "adrfam": "IPv4", 00:18:09.490 "traddr": "10.0.0.1", 00:18:09.490 "trsvcid": "48974" 00:18:09.490 }, 00:18:09.490 "auth": { 00:18:09.490 "state": "completed", 00:18:09.490 "digest": "sha512", 00:18:09.490 "dhgroup": "ffdhe8192" 00:18:09.490 } 00:18:09.490 } 00:18:09.490 ]' 00:18:09.490 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.490 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.490 
00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.751 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.751 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.751 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.751 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.751 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.012 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:18:10.012 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:18:10.584 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.584 00:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.584 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.156 00:18:11.156 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.156 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.156 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.418 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.418 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.418 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.418 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.418 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.418 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.418 { 00:18:11.418 "cntlid": 145, 00:18:11.418 "qid": 0, 00:18:11.418 "state": "enabled", 00:18:11.418 "thread": "nvmf_tgt_poll_group_000", 00:18:11.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.418 "listen_address": { 00:18:11.418 "trtype": "TCP", 00:18:11.418 "adrfam": "IPv4", 00:18:11.418 "traddr": "10.0.0.2", 00:18:11.418 "trsvcid": "4420" 00:18:11.418 }, 00:18:11.418 "peer_address": { 00:18:11.418 
"trtype": "TCP", 00:18:11.418 "adrfam": "IPv4", 00:18:11.418 "traddr": "10.0.0.1", 00:18:11.418 "trsvcid": "49006" 00:18:11.418 }, 00:18:11.419 "auth": { 00:18:11.419 "state": "completed", 00:18:11.419 "digest": "sha512", 00:18:11.419 "dhgroup": "ffdhe8192" 00:18:11.419 } 00:18:11.419 } 00:18:11.419 ]' 00:18:11.419 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.419 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.419 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.419 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.419 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.419 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.419 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.419 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.684 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:18:11.684 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWVlMTlmOGZmZTk3NTYzNTAxMmM4MGEyMzJmOGIyZDUwYTUxOTIzNmQ2NzQ4Y2QzlWFTxQ==: --dhchap-ctrl-secret DHHC-1:03:ZDA3OTllMjk2ZjMzYTYzZmRiMWI0M2NjOTJkNWJhODJlNmIyZWYyMTE4YWFjYmIwYmJkNDdmMTAzNWRmMTcwZQH4CrE=: 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:12.255 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:12.827 request: 00:18:12.827 { 00:18:12.827 "name": "nvme0", 00:18:12.827 "trtype": "tcp", 00:18:12.827 "traddr": "10.0.0.2", 00:18:12.827 "adrfam": "ipv4", 00:18:12.827 "trsvcid": "4420", 00:18:12.827 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.827 "prchk_reftag": false, 00:18:12.827 "prchk_guard": false, 00:18:12.827 "hdgst": false, 00:18:12.827 "ddgst": false, 00:18:12.827 "dhchap_key": "key2", 00:18:12.827 "allow_unrecognized_csi": false, 00:18:12.827 "method": "bdev_nvme_attach_controller", 00:18:12.827 "req_id": 1 00:18:12.827 } 00:18:12.827 Got JSON-RPC error response 00:18:12.827 response: 00:18:12.827 { 00:18:12.827 "code": -5, 00:18:12.827 "message": "Input/output error" 00:18:12.827 } 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.827 00:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:12.827 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:13.089 request: 00:18:13.089 { 00:18:13.089 "name": "nvme0", 00:18:13.089 "trtype": "tcp", 00:18:13.089 "traddr": "10.0.0.2", 00:18:13.089 "adrfam": "ipv4", 00:18:13.089 "trsvcid": "4420", 00:18:13.089 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:13.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.089 "prchk_reftag": false, 00:18:13.089 "prchk_guard": false, 00:18:13.089 "hdgst": false, 00:18:13.089 "ddgst": false, 00:18:13.089 "dhchap_key": "key1", 00:18:13.090 "dhchap_ctrlr_key": "ckey2", 00:18:13.090 "allow_unrecognized_csi": false, 00:18:13.090 "method": "bdev_nvme_attach_controller", 00:18:13.090 "req_id": 1 00:18:13.090 } 00:18:13.090 Got JSON-RPC error response 00:18:13.090 response: 00:18:13.090 { 00:18:13.090 "code": -5, 00:18:13.090 "message": "Input/output error" 00:18:13.090 } 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:13.090 00:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.090 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.351 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.611 request: 00:18:13.611 { 00:18:13.611 "name": "nvme0", 00:18:13.611 "trtype": "tcp", 00:18:13.611 "traddr": "10.0.0.2", 00:18:13.611 "adrfam": "ipv4", 00:18:13.611 "trsvcid": "4420", 00:18:13.611 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:13.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.611 "prchk_reftag": false, 00:18:13.611 "prchk_guard": false, 00:18:13.611 "hdgst": false, 00:18:13.611 "ddgst": false, 00:18:13.611 "dhchap_key": "key1", 00:18:13.611 "dhchap_ctrlr_key": "ckey1", 00:18:13.611 "allow_unrecognized_csi": false, 00:18:13.611 "method": "bdev_nvme_attach_controller", 00:18:13.611 "req_id": 1 00:18:13.611 } 00:18:13.611 Got JSON-RPC error response 00:18:13.611 response: 00:18:13.611 { 00:18:13.611 "code": -5, 00:18:13.611 "message": "Input/output error" 00:18:13.611 } 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3224547 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3224547 ']' 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3224547 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.611 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3224547 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3224547' 00:18:13.872 killing process with pid 3224547 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3224547 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3224547 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3250134 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3250134 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3250134 ']' 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.872 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.815 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:14.815 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:14.815 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:14.815 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:14.815 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.815 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.815 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:14.815 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3250134 00:18:14.816 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3250134 ']' 00:18:14.816 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.816 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:14.816 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
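At this point the previous nvmf target (pid 3224547) has been killed and a fresh one is started with --wait-for-rpc and the nvmf_auth debug log flag, so configuration can be issued over RPC before the framework finishes initializing and the DH-HMAC-CHAP handshakes are traced. A minimal stand-alone sketch of that restart, assuming the same flags and the default /var/tmp/spdk.sock RPC socket named in the trace (waitforlisten is a harness helper; polling rpc_get_methods and calling framework_start_init below are an approximation of it, not the script's own code):

# start nvmf_tgt with deferred init and nvmf_auth debug logging (flags as in the trace)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# wait until the RPC server answers on the default socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# ...issue any pre-init configuration here, then let the framework finish initializing
./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init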
00:18:14.816 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:14.816 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.077 null0 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wTE 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.vhm ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vhm 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JYJ 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.FXj ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FXj 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.077 00:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Vay 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.GqX ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GqX 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.077 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.90f 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
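In this pass the DH-HMAC-CHAP material comes from keyring entries rather than inline DHHC-1 secrets: each /tmp/spdk.key-* file is registered under a short name (key0..key3, ckey0..ckey2), the subsystem's host entry is bound to one of those names, and the host-side controller attaches using the matching name. A condensed sketch of the key3 path using only the RPCs visible above (file paths, NQNs and sockets are the test's own; each application resolves a key name against keys registered over its own RPC socket):

# target side: register the key file and require it for this host
rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.90f
rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
# host side: attach a controller that authenticates with the same key name
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3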
00:18:15.078 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.020 nvme0n1 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.020 { 00:18:16.020 "cntlid": 1, 00:18:16.020 "qid": 0, 00:18:16.020 "state": "enabled", 00:18:16.020 "thread": "nvmf_tgt_poll_group_000", 00:18:16.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.020 "listen_address": { 00:18:16.020 "trtype": "TCP", 00:18:16.020 "adrfam": "IPv4", 00:18:16.020 "traddr": "10.0.0.2", 00:18:16.020 "trsvcid": "4420" 00:18:16.020 }, 00:18:16.020 "peer_address": { 00:18:16.020 "trtype": "TCP", 00:18:16.020 "adrfam": "IPv4", 00:18:16.020 "traddr": "10.0.0.1", 00:18:16.020 "trsvcid": "49052" 00:18:16.020 }, 00:18:16.020 "auth": { 00:18:16.020 "state": "completed", 00:18:16.020 "digest": "sha512", 00:18:16.020 "dhgroup": "ffdhe8192" 00:18:16.020 } 00:18:16.020 } 00:18:16.020 ]' 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.020 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.281 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.281 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.281 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.281 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.281 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.541 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:18:16.541 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:17.112 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.374 request: 00:18:17.374 { 00:18:17.374 "name": "nvme0", 00:18:17.374 "trtype": "tcp", 00:18:17.374 "traddr": "10.0.0.2", 00:18:17.374 "adrfam": "ipv4", 00:18:17.374 "trsvcid": "4420", 00:18:17.374 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.374 "prchk_reftag": false, 00:18:17.374 "prchk_guard": false, 00:18:17.374 "hdgst": false, 00:18:17.374 "ddgst": false, 00:18:17.374 "dhchap_key": "key3", 00:18:17.374 "allow_unrecognized_csi": false, 00:18:17.374 "method": "bdev_nvme_attach_controller", 00:18:17.374 "req_id": 1 00:18:17.374 } 00:18:17.374 Got JSON-RPC error response 00:18:17.374 response: 00:18:17.374 { 00:18:17.374 "code": -5, 00:18:17.374 "message": "Input/output error" 00:18:17.374 } 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:17.374 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.636 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.636 request: 00:18:17.636 { 00:18:17.636 "name": "nvme0", 00:18:17.636 "trtype": "tcp", 00:18:17.636 "traddr": "10.0.0.2", 00:18:17.636 "adrfam": "ipv4", 00:18:17.636 "trsvcid": "4420", 00:18:17.636 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.636 "prchk_reftag": false, 00:18:17.636 "prchk_guard": false, 00:18:17.636 "hdgst": false, 00:18:17.636 "ddgst": false, 00:18:17.636 "dhchap_key": "key3", 00:18:17.636 "allow_unrecognized_csi": false, 00:18:17.636 "method": "bdev_nvme_attach_controller", 00:18:17.636 "req_id": 1 00:18:17.636 } 00:18:17.636 Got JSON-RPC error response 00:18:17.636 response: 00:18:17.636 { 00:18:17.636 "code": -5, 00:18:17.636 "message": "Input/output error" 00:18:17.636 } 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:17.897 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:17.898 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:18.158 request: 00:18:18.158 { 00:18:18.158 "name": "nvme0", 00:18:18.158 "trtype": "tcp", 00:18:18.158 "traddr": "10.0.0.2", 00:18:18.158 "adrfam": "ipv4", 00:18:18.158 "trsvcid": "4420", 00:18:18.158 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:18.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.158 "prchk_reftag": false, 00:18:18.158 "prchk_guard": false, 00:18:18.158 "hdgst": false, 00:18:18.158 "ddgst": false, 00:18:18.158 "dhchap_key": "key0", 00:18:18.158 "dhchap_ctrlr_key": "key1", 00:18:18.158 "allow_unrecognized_csi": false, 00:18:18.158 "method": "bdev_nvme_attach_controller", 00:18:18.158 "req_id": 1 00:18:18.158 } 00:18:18.158 Got JSON-RPC error response 00:18:18.158 response: 00:18:18.158 { 00:18:18.158 "code": -5, 00:18:18.158 "message": "Input/output error" 00:18:18.158 } 00:18:18.424 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:18.424 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:18.424 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:18.424 00:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:18.424 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:18.424 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:18.424 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:18.424 nvme0n1 00:18:18.686 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:18.686 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:18.686 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.686 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.686 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.686 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.946 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:18.946 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.946 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.946 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.946 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:18.946 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:18.946 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:19.885 nvme0n1 00:18:19.885 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:19.885 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:19.885 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.885 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.885 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:19.885 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.885 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.886 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:19.886 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:19.886 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.146 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.146 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:18:20.146 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: --dhchap-ctrl-secret DHHC-1:03:NTY3OGU5Yzg2ODVhN2ZiMTc4MzdhNDgwYmNhNTQyZDZkZDE3NTVkMjAxN2Y4MDZhZDQzNjhjMjljODk5ZWUwZPFhi2U=: 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.717 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.977 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.977 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:20.977 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:20.977 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:21.237 request: 00:18:21.237 { 00:18:21.237 "name": "nvme0", 00:18:21.237 "trtype": "tcp", 00:18:21.237 "traddr": "10.0.0.2", 00:18:21.237 "adrfam": "ipv4", 00:18:21.237 "trsvcid": "4420", 00:18:21.237 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.237 "prchk_reftag": false, 00:18:21.237 "prchk_guard": false, 00:18:21.237 "hdgst": false, 00:18:21.237 "ddgst": false, 00:18:21.237 "dhchap_key": "key1", 00:18:21.237 "allow_unrecognized_csi": false, 00:18:21.237 "method": "bdev_nvme_attach_controller", 00:18:21.237 "req_id": 1 00:18:21.237 } 00:18:21.237 Got JSON-RPC error response 00:18:21.237 response: 00:18:21.237 { 00:18:21.237 "code": -5, 00:18:21.237 "message": "Input/output error" 00:18:21.237 } 00:18:21.237 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.237 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.237 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.237 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.237 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:21.237 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:21.237 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:22.179 nvme0n1 00:18:22.179 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:22.179 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:22.179 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.179 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.179 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.179 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.439 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.439 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.439 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.439 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.439 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:22.439 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:22.439 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:22.699 nvme0n1 00:18:22.699 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:22.699 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:22.699 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: '' 2s 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:22.958 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: ]] 00:18:22.959 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjU0NTAyNzlhYTdhYTJjMzZiMDRmOWJjMGM3ZjNkYzCZ8b3c: 00:18:22.959 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:22.959 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:22.959 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: 2s 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: ]] 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzQzNzY5NzJiMGI1NGMyODkwNDdkYjk1ZDJmMWE0OTBmZGI1MjU2OGEyMzY5Zjk4bgm3qg==: 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:25.498 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:27.416 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:27.417 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:27.987 nvme0n1 00:18:27.987 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.987 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.987 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.987 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.987 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.987 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.256 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:28.256 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.256 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:28.564 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.866 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:29.439 request: 00:18:29.439 { 00:18:29.439 "name": "nvme0", 00:18:29.439 "dhchap_key": "key1", 00:18:29.439 "dhchap_ctrlr_key": "key3", 00:18:29.439 "method": "bdev_nvme_set_keys", 00:18:29.439 "req_id": 1 00:18:29.439 } 00:18:29.439 Got JSON-RPC error response 00:18:29.439 response: 00:18:29.439 { 00:18:29.439 "code": -13, 00:18:29.439 "message": "Permission denied" 00:18:29.439 } 00:18:29.439 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:29.439 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.439 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.439 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.439 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:29.439 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:29.439 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.439 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:29.439 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:30.384 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:30.384 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:30.384 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.644 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:30.644 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:30.644 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.644 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.644 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.644 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:30.644 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:30.644 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.583 nvme0n1 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
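The key rotation exercised in this stretch of the trace is a two-step exchange, one RPC per side; the sketch below is reconstructed from the set_keys calls in this run, reusing its key names and NQNs. A pair the target has not been configured to accept is refused, which is the -13 "Permission denied" response printed a little further down.

  # Target: switch the key pair this host may use to key2 (host key) and
  # key3 (controller key).
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # Host: re-authenticate the already attached controller with the new pair.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # Asking for a pair the target does not accept (key2/key0 here) makes
  # bdev_nvme_set_keys fail with JSON-RPC error -13, Permission denied.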
00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.583 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:31.843 request: 00:18:31.843 { 00:18:31.843 "name": "nvme0", 00:18:31.843 "dhchap_key": "key2", 00:18:31.843 "dhchap_ctrlr_key": "key0", 00:18:31.843 "method": "bdev_nvme_set_keys", 00:18:31.843 "req_id": 1 00:18:31.843 } 00:18:31.843 Got JSON-RPC error response 00:18:31.843 response: 00:18:31.843 { 00:18:31.843 "code": -13, 00:18:31.843 "message": "Permission denied" 00:18:31.843 } 00:18:31.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:31.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:31.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:31.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.102 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:32.102 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:33.041 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:33.041 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:33.041 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3224876 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3224876 ']' 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3224876 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:33.301 
00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3224876 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3224876' 00:18:33.301 killing process with pid 3224876 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3224876 00:18:33.301 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3224876 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:33.561 rmmod nvme_tcp 00:18:33.561 rmmod nvme_fabrics 00:18:33.561 rmmod nvme_keyring 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 3250134 ']' 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 3250134 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3250134 ']' 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3250134 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3250134 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3250134' 00:18:33.561 killing process with pid 3250134 00:18:33.561 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3250134 00:18:33.561 00:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3250134 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.822 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wTE /tmp/spdk.key-sha256.JYJ /tmp/spdk.key-sha384.Vay /tmp/spdk.key-sha512.90f /tmp/spdk.key-sha512.vhm /tmp/spdk.key-sha384.FXj /tmp/spdk.key-sha256.GqX '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:36.363 00:18:36.363 real 2m32.610s 00:18:36.363 user 5m44.200s 00:18:36.363 sys 0m21.788s 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.363 ************************************ 00:18:36.363 END TEST nvmf_auth_target 00:18:36.363 ************************************ 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:36.363 ************************************ 00:18:36.363 START TEST nvmf_bdevio_no_huge 00:18:36.363 ************************************ 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:36.363 * Looking for test storage... 
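Before the bdevio stage above begins, nvmf_auth_target tears its environment down; condensed, the cleanup traced in the preceding lines amounts to roughly the following. This is a sketch: the PIDs are the ones from this run, and the single iptables pipeline is an assumption about how the iptr helper combines the three commands (iptables-save, grep -v SPDK_NVMF, iptables-restore) seen in the trace.

  kill 3224876                     # host-side SPDK app (reactor_1)
  kill 3250134                     # target SPDK app (reactor_0)
  modprobe -v -r nvme-tcp          # rmmod nvme_tcp / nvme_fabrics / nvme_keyring here
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # assumed expansion of iptr
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk.key-*            # the null/sha256/sha384/sha512 keys created at setup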
00:18:36.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.363 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:36.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.364 --rc genhtml_branch_coverage=1 00:18:36.364 --rc genhtml_function_coverage=1 00:18:36.364 --rc genhtml_legend=1 00:18:36.364 --rc geninfo_all_blocks=1 00:18:36.364 --rc geninfo_unexecuted_blocks=1 00:18:36.364 00:18:36.364 ' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:36.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.364 --rc genhtml_branch_coverage=1 00:18:36.364 --rc genhtml_function_coverage=1 00:18:36.364 --rc genhtml_legend=1 00:18:36.364 --rc geninfo_all_blocks=1 00:18:36.364 --rc geninfo_unexecuted_blocks=1 00:18:36.364 00:18:36.364 ' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:36.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.364 --rc genhtml_branch_coverage=1 00:18:36.364 --rc genhtml_function_coverage=1 00:18:36.364 --rc genhtml_legend=1 00:18:36.364 --rc geninfo_all_blocks=1 00:18:36.364 --rc geninfo_unexecuted_blocks=1 00:18:36.364 00:18:36.364 ' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:36.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.364 --rc genhtml_branch_coverage=1 00:18:36.364 --rc genhtml_function_coverage=1 00:18:36.364 --rc genhtml_legend=1 00:18:36.364 --rc geninfo_all_blocks=1 00:18:36.364 --rc geninfo_unexecuted_blocks=1 00:18:36.364 00:18:36.364 ' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:36.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:36.364 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:36.365 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:36.365 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:44.520 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:44.520 
00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:44.521 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:44.521 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:44.521 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:44.521 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.521 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:44.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:18:44.521 00:18:44.521 --- 10.0.0.2 ping statistics --- 00:18:44.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.521 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:18:44.521 00:18:44.521 --- 10.0.0.1 ping statistics --- 00:18:44.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.521 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:44.521 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=3258293 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 3258293 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3258293 ']' 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.522 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.522 [2024-10-09 00:26:14.250130] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:18:44.522 [2024-10-09 00:26:14.250203] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:44.522 [2024-10-09 00:26:14.331320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.522 [2024-10-09 00:26:14.440717] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.522 [2024-10-09 00:26:14.440777] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.522 [2024-10-09 00:26:14.440786] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.522 [2024-10-09 00:26:14.440794] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.522 [2024-10-09 00:26:14.440800] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
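The target for this stage is launched inside the cvl_0_0_ns_spdk namespace with hugepages disabled, using the flags visible in the log above (-i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78). A minimal standalone sketch of that launch is shown below; the repository path and namespace name are taken from this run, while the readiness loop polling rpc_get_methods is only an assumption for illustration (the harness itself waits via its own waitforlisten helper), not the test's actual mechanism.

    #!/usr/bin/env bash
    # Sketch: start the SPDK NVMe-oF target inside the test namespace without hugepages.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as seen in this log
    NS=cvl_0_0_ns_spdk                                       # namespace created earlier by nvmf_tcp_init

    # Same flags as in this run: shm id 0, all tracepoints, no hugepages, 1024 MB, core mask 0x78.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    tgt_pid=$!

    # The RPC endpoint is a Unix socket, so it is reachable from the default namespace;
    # poll it until the application answers (assumed readiness probe, not the harness's helper).
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $tgt_pid) is ready"

Once the socket answers, the provisioning RPCs recorded further down in the log (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) can be issued the same way through rpc.py.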
00:18:44.522 [2024-10-09 00:26:14.442356] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:18:44.522 [2024-10-09 00:26:14.442450] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:18:44.522 [2024-10-09 00:26:14.442623] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.522 [2024-10-09 00:26:14.442623] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.522 [2024-10-09 00:26:15.125599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.522 Malloc0 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.522 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.783 [2024-10-09 00:26:15.179285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:44.783 { 00:18:44.783 "params": { 00:18:44.783 "name": "Nvme$subsystem", 00:18:44.783 "trtype": "$TEST_TRANSPORT", 00:18:44.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.783 "adrfam": "ipv4", 00:18:44.783 "trsvcid": "$NVMF_PORT", 00:18:44.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.783 "hdgst": ${hdgst:-false}, 00:18:44.783 "ddgst": ${ddgst:-false} 00:18:44.783 }, 00:18:44.783 "method": "bdev_nvme_attach_controller" 00:18:44.783 } 00:18:44.783 EOF 00:18:44.783 )") 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:44.783 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:44.783 "params": { 00:18:44.783 "name": "Nvme1", 00:18:44.783 "trtype": "tcp", 00:18:44.783 "traddr": "10.0.0.2", 00:18:44.783 "adrfam": "ipv4", 00:18:44.783 "trsvcid": "4420", 00:18:44.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.783 "hdgst": false, 00:18:44.783 "ddgst": false 00:18:44.783 }, 00:18:44.783 "method": "bdev_nvme_attach_controller" 00:18:44.783 }' 00:18:44.783 [2024-10-09 00:26:15.236027] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:18:44.783 [2024-10-09 00:26:15.236097] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3258641 ] 00:18:44.783 [2024-10-09 00:26:15.323495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:45.044 [2024-10-09 00:26:15.431946] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.044 [2024-10-09 00:26:15.432264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.044 [2024-10-09 00:26:15.432265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.305 I/O targets: 00:18:45.305 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:45.305 00:18:45.305 00:18:45.305 CUnit - A unit testing framework for C - Version 2.1-3 00:18:45.305 http://cunit.sourceforge.net/ 00:18:45.305 00:18:45.305 00:18:45.305 Suite: bdevio tests on: Nvme1n1 00:18:45.305 Test: blockdev write read block ...passed 00:18:45.305 Test: blockdev write zeroes read block ...passed 00:18:45.305 Test: blockdev write zeroes read no split ...passed 00:18:45.305 Test: blockdev write zeroes read split ...passed 00:18:45.566 Test: blockdev write zeroes read split partial ...passed 00:18:45.566 Test: blockdev reset ...[2024-10-09 00:26:15.952004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:45.566 [2024-10-09 00:26:15.952116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa768c0 (9): Bad file descriptor 00:18:45.566 [2024-10-09 00:26:16.009685] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:45.566 passed 00:18:45.566 Test: blockdev write read 8 blocks ...passed 00:18:45.566 Test: blockdev write read size > 128k ...passed 00:18:45.566 Test: blockdev write read invalid size ...passed 00:18:45.566 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:45.566 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:45.566 Test: blockdev write read max offset ...passed 00:18:45.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:45.827 Test: blockdev writev readv 8 blocks ...passed 00:18:45.827 Test: blockdev writev readv 30 x 1block ...passed 00:18:45.827 Test: blockdev writev readv block ...passed 00:18:45.827 Test: blockdev writev readv size > 128k ...passed 00:18:45.827 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:45.827 Test: blockdev comparev and writev ...[2024-10-09 00:26:16.316154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.827 [2024-10-09 00:26:16.316205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.316223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.827 [2024-10-09 00:26:16.316232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.316673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.827 [2024-10-09 00:26:16.316686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.316701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.827 [2024-10-09 00:26:16.316709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.317135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.827 [2024-10-09 00:26:16.317146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.317161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.827 [2024-10-09 00:26:16.317169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.317595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.827 [2024-10-09 00:26:16.317607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.317621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.827 [2024-10-09 00:26:16.317630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:45.827 passed 00:18:45.827 Test: blockdev nvme passthru rw ...passed 00:18:45.827 Test: blockdev nvme passthru vendor specific ...[2024-10-09 00:26:16.403602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:45.827 [2024-10-09 00:26:16.403618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.403899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:45.827 [2024-10-09 00:26:16.403916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.404198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:45.827 [2024-10-09 00:26:16.404210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:45.827 [2024-10-09 00:26:16.404585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:45.827 [2024-10-09 00:26:16.404596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:45.827 passed 00:18:45.827 Test: blockdev nvme admin passthru ...passed 00:18:46.087 Test: blockdev copy ...passed 00:18:46.087 00:18:46.087 Run Summary: Type Total Ran Passed Failed Inactive 00:18:46.087 suites 1 1 n/a 0 0 00:18:46.087 tests 23 23 23 0 0 00:18:46.087 asserts 152 152 152 0 n/a 00:18:46.087 00:18:46.087 Elapsed time = 1.391 seconds 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:46.347 rmmod nvme_tcp 00:18:46.347 rmmod nvme_fabrics 00:18:46.347 rmmod nvme_keyring 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 3258293 ']' 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 3258293 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3258293 ']' 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3258293 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3258293 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3258293' 00:18:46.347 killing process with pid 3258293 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3258293 00:18:46.347 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3258293 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.606 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.147 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:49.147 00:18:49.147 real 0m12.774s 00:18:49.147 user 0m15.640s 00:18:49.147 sys 0m6.701s 00:18:49.147 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.147 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.147 ************************************ 00:18:49.147 END TEST nvmf_bdevio_no_huge 00:18:49.147 ************************************ 00:18:49.147 00:26:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:49.147 00:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:49.147 00:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.147 00:26:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:49.147 ************************************ 00:18:49.147 START TEST nvmf_tls 00:18:49.147 ************************************ 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:49.148 * Looking for test storage... 00:18:49.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:49.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.148 --rc genhtml_branch_coverage=1 00:18:49.148 --rc genhtml_function_coverage=1 00:18:49.148 --rc genhtml_legend=1 00:18:49.148 --rc geninfo_all_blocks=1 00:18:49.148 --rc geninfo_unexecuted_blocks=1 00:18:49.148 00:18:49.148 ' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:49.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.148 --rc genhtml_branch_coverage=1 00:18:49.148 --rc genhtml_function_coverage=1 00:18:49.148 --rc genhtml_legend=1 00:18:49.148 --rc geninfo_all_blocks=1 00:18:49.148 --rc geninfo_unexecuted_blocks=1 00:18:49.148 00:18:49.148 ' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:49.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.148 --rc genhtml_branch_coverage=1 00:18:49.148 --rc genhtml_function_coverage=1 00:18:49.148 --rc genhtml_legend=1 00:18:49.148 --rc geninfo_all_blocks=1 00:18:49.148 --rc geninfo_unexecuted_blocks=1 00:18:49.148 00:18:49.148 ' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:49.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.148 --rc genhtml_branch_coverage=1 00:18:49.148 --rc genhtml_function_coverage=1 00:18:49.148 --rc genhtml_legend=1 00:18:49.148 --rc geninfo_all_blocks=1 00:18:49.148 --rc geninfo_unexecuted_blocks=1 00:18:49.148 00:18:49.148 ' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:49.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.148 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:49.149 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:49.149 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:49.149 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:57.286 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:57.287 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:57.287 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:57.287 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:57.287 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.287 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:57.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:18:57.287 00:18:57.287 --- 10.0.0.2 ping statistics --- 00:18:57.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.287 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:18:57.287 00:18:57.287 --- 10.0.0.1 ping statistics --- 00:18:57.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.287 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3263048 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3263048 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3263048 ']' 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.287 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:57.288 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.288 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.288 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.288 [2024-10-09 00:26:27.170098] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:18:57.288 [2024-10-09 00:26:27.170158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.288 [2024-10-09 00:26:27.261849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.288 [2024-10-09 00:26:27.354621] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.288 [2024-10-09 00:26:27.354679] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.288 [2024-10-09 00:26:27.354687] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.288 [2024-10-09 00:26:27.354700] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.288 [2024-10-09 00:26:27.354706] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.288 [2024-10-09 00:26:27.355517] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.548 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.548 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:57.548 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:57.548 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:57.548 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.548 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.548 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:57.548 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:57.816 true 00:18:57.817 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.817 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:57.817 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:57.817 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:57.817 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:58.081 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.081 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:58.342 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:58.342 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:58.342 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:58.342 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.342 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:58.602 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:58.602 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:58.602 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.602 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:58.862 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:58.862 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:58.862 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:59.123 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.123 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:59.123 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:59.123 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:59.123 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:59.384 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.384 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.hUEznNur8V 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.VCOFV9tFaE 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hUEznNur8V 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.VCOFV9tFaE 00:18:59.645 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:59.905 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:00.165 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.hUEznNur8V 00:19:00.165 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hUEznNur8V 00:19:00.165 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:00.165 [2024-10-09 00:26:30.764532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.165 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:00.425 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:00.685 [2024-10-09 00:26:31.117395] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.685 [2024-10-09 00:26:31.117589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.685 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:00.950 malloc0 00:19:00.950 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:00.950 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hUEznNur8V 00:19:01.210 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:01.470 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hUEznNur8V 00:19:11.476 Initializing NVMe Controllers 00:19:11.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:11.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:11.476 Initialization complete. Launching workers. 00:19:11.476 ======================================================== 00:19:11.476 Latency(us) 00:19:11.476 Device Information : IOPS MiB/s Average min max 00:19:11.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18558.88 72.50 3448.70 1163.03 4675.11 00:19:11.476 ======================================================== 00:19:11.476 Total : 18558.88 72.50 3448.70 1163.03 4675.11 00:19:11.476 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUEznNur8V 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUEznNur8V 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3266051 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3266051 /var/tmp/bdevperf.sock 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3266051 ']' 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:11.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.476 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.476 [2024-10-09 00:26:42.041819] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:19:11.476 [2024-10-09 00:26:42.041877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266051 ] 00:19:11.737 [2024-10-09 00:26:42.120584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.737 [2024-10-09 00:26:42.184470] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.309 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.309 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:12.309 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUEznNur8V 00:19:12.570 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.570 [2024-10-09 00:26:43.184055] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.832 TLSTESTn1 00:19:12.832 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:12.832 Running I/O for 10 seconds... 
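Condensed, the RPC sequence this tls.sh pass drove (the setup_nvmf_tgt calls plus the bdevperf attach traced above) is roughly the following. It is a sketch of this particular run, not a general recipe: the PSK path, NQNs and addresses are the ones from the trace, the $rpc/$psk variables are only shorthand here, and the target app itself was launched earlier under ip netns exec cvl_0_0_ns_spdk.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
psk=/tmp/tmp.hUEznNur8V      # interchange-format PSK written and chmod 0600 earlier in the trace

# target side: prefer the ssl sock impl, force TLS 1.3, open a TLS-enabled listener
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$psk"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side: register the same key on the bdevperf RPC socket, attach with --psk,
# then kick off the timed verify job
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$psk"
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests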
00:19:15.192 4549.00 IOPS, 17.77 MiB/s [2024-10-08T22:26:46.408Z] 5157.50 IOPS, 20.15 MiB/s [2024-10-08T22:26:47.791Z] 4931.33 IOPS, 19.26 MiB/s [2024-10-08T22:26:48.740Z] 5010.50 IOPS, 19.57 MiB/s [2024-10-08T22:26:49.685Z] 5036.20 IOPS, 19.67 MiB/s [2024-10-08T22:26:50.626Z] 5239.00 IOPS, 20.46 MiB/s [2024-10-08T22:26:51.567Z] 5332.86 IOPS, 20.83 MiB/s [2024-10-08T22:26:52.506Z] 5297.88 IOPS, 20.69 MiB/s [2024-10-08T22:26:53.454Z] 5221.00 IOPS, 20.39 MiB/s [2024-10-08T22:26:53.454Z] 5287.20 IOPS, 20.65 MiB/s 00:19:22.819 Latency(us) 00:19:22.819 [2024-10-08T22:26:53.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.819 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:22.819 Verification LBA range: start 0x0 length 0x2000 00:19:22.819 TLSTESTn1 : 10.01 5293.94 20.68 0.00 0.00 24144.64 4587.52 27415.89 00:19:22.819 [2024-10-08T22:26:53.454Z] =================================================================================================================== 00:19:22.819 [2024-10-08T22:26:53.454Z] Total : 5293.94 20.68 0.00 0.00 24144.64 4587.52 27415.89 00:19:22.819 { 00:19:22.819 "results": [ 00:19:22.819 { 00:19:22.819 "job": "TLSTESTn1", 00:19:22.819 "core_mask": "0x4", 00:19:22.819 "workload": "verify", 00:19:22.819 "status": "finished", 00:19:22.819 "verify_range": { 00:19:22.819 "start": 0, 00:19:22.819 "length": 8192 00:19:22.819 }, 00:19:22.819 "queue_depth": 128, 00:19:22.819 "io_size": 4096, 00:19:22.819 "runtime": 10.011062, 00:19:22.819 "iops": 5293.943839324938, 00:19:22.819 "mibps": 20.67946812236304, 00:19:22.819 "io_failed": 0, 00:19:22.819 "io_timeout": 0, 00:19:22.819 "avg_latency_us": 24144.64117539027, 00:19:22.819 "min_latency_us": 4587.52, 00:19:22.819 "max_latency_us": 27415.893333333333 00:19:22.819 } 00:19:22.819 ], 00:19:22.819 "core_count": 1 00:19:22.819 } 00:19:22.819 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.819 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3266051 00:19:22.819 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3266051 ']' 00:19:22.819 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3266051 00:19:22.819 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:22.819 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.819 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3266051 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3266051' 00:19:23.084 killing process with pid 3266051 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3266051 00:19:23.084 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.084 00:19:23.084 Latency(us) 00:19:23.084 [2024-10-08T22:26:53.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.084 [2024-10-08T22:26:53.719Z] 
=================================================================================================================== 00:19:23.084 [2024-10-08T22:26:53.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3266051 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VCOFV9tFaE 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VCOFV9tFaE 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VCOFV9tFaE 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VCOFV9tFaE 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3268324 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3268324 /var/tmp/bdevperf.sock 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3268324 ']' 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.084 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.084 [2024-10-09 00:26:53.662414] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:19:23.084 [2024-10-09 00:26:53.662471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268324 ] 00:19:23.344 [2024-10-09 00:26:53.740225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.344 [2024-10-09 00:26:53.791647] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.915 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.915 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:23.915 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VCOFV9tFaE 00:19:24.176 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:24.176 [2024-10-09 00:26:54.786139] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.176 [2024-10-09 00:26:54.793907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:24.176 [2024-10-09 00:26:54.794364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71ec70 (107): Transport endpoint is not connected 00:19:24.176 [2024-10-09 00:26:54.795359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71ec70 (9): Bad file descriptor 00:19:24.176 [2024-10-09 00:26:54.796361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:24.176 [2024-10-09 00:26:54.796369] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:24.176 [2024-10-09 00:26:54.796375] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:24.176 [2024-10-09 00:26:54.796382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:24.176 request: 00:19:24.176 { 00:19:24.176 "name": "TLSTEST", 00:19:24.176 "trtype": "tcp", 00:19:24.176 "traddr": "10.0.0.2", 00:19:24.176 "adrfam": "ipv4", 00:19:24.176 "trsvcid": "4420", 00:19:24.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.176 "prchk_reftag": false, 00:19:24.176 "prchk_guard": false, 00:19:24.176 "hdgst": false, 00:19:24.176 "ddgst": false, 00:19:24.176 "psk": "key0", 00:19:24.176 "allow_unrecognized_csi": false, 00:19:24.176 "method": "bdev_nvme_attach_controller", 00:19:24.176 "req_id": 1 00:19:24.176 } 00:19:24.176 Got JSON-RPC error response 00:19:24.176 response: 00:19:24.176 { 00:19:24.176 "code": -5, 00:19:24.176 "message": "Input/output error" 00:19:24.176 } 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3268324 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3268324 ']' 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3268324 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3268324 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3268324' 00:19:24.438 killing process with pid 3268324 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3268324 00:19:24.438 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.438 00:19:24.438 Latency(us) 00:19:24.438 [2024-10-08T22:26:55.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.438 [2024-10-08T22:26:55.073Z] =================================================================================================================== 00:19:24.438 [2024-10-08T22:26:55.073Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3268324 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hUEznNur8V 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.hUEznNur8V 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:24.438 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hUEznNur8V 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUEznNur8V 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3268511 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3268511 /var/tmp/bdevperf.sock 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3268511 ']' 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.438 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.438 [2024-10-09 00:26:55.063782] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:19:24.438 [2024-10-09 00:26:55.063837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268511 ] 00:19:24.699 [2024-10-09 00:26:55.140994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.699 [2024-10-09 00:26:55.193058] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.278 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.278 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:25.278 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUEznNur8V 00:19:25.537 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:25.797 [2024-10-09 00:26:56.183603] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.797 [2024-10-09 00:26:56.188047] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:25.797 [2024-10-09 00:26:56.188065] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:25.797 [2024-10-09 00:26:56.188084] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.797 [2024-10-09 00:26:56.188732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1c70 (107): Transport endpoint is not connected 00:19:25.797 [2024-10-09 00:26:56.189727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1c70 (9): Bad file descriptor 00:19:25.797 [2024-10-09 00:26:56.190729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:25.797 [2024-10-09 00:26:56.190735] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.797 [2024-10-09 00:26:56.190741] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:25.797 [2024-10-09 00:26:56.190749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:25.797 request: 00:19:25.797 { 00:19:25.797 "name": "TLSTEST", 00:19:25.797 "trtype": "tcp", 00:19:25.797 "traddr": "10.0.0.2", 00:19:25.797 "adrfam": "ipv4", 00:19:25.797 "trsvcid": "4420", 00:19:25.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.797 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:25.797 "prchk_reftag": false, 00:19:25.797 "prchk_guard": false, 00:19:25.797 "hdgst": false, 00:19:25.797 "ddgst": false, 00:19:25.797 "psk": "key0", 00:19:25.797 "allow_unrecognized_csi": false, 00:19:25.797 "method": "bdev_nvme_attach_controller", 00:19:25.797 "req_id": 1 00:19:25.797 } 00:19:25.797 Got JSON-RPC error response 00:19:25.797 response: 00:19:25.797 { 00:19:25.797 "code": -5, 00:19:25.797 "message": "Input/output error" 00:19:25.797 } 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3268511 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3268511 ']' 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3268511 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3268511 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3268511' 00:19:25.797 killing process with pid 3268511 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3268511 00:19:25.797 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.797 00:19:25.797 Latency(us) 00:19:25.797 [2024-10-08T22:26:56.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.797 [2024-10-08T22:26:56.432Z] =================================================================================================================== 00:19:25.797 [2024-10-08T22:26:56.432Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3268511 00:19:25.797 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUEznNur8V 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.hUEznNur8V 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUEznNur8V 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUEznNur8V 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3268763 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3268763 /var/tmp/bdevperf.sock 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3268763 ']' 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.798 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.058 [2024-10-09 00:26:56.446892] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:19:26.058 [2024-10-09 00:26:56.446946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268763 ] 00:19:26.058 [2024-10-09 00:26:56.522353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.058 [2024-10-09 00:26:56.573407] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.627 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.627 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:26.627 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUEznNur8V 00:19:26.892 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.185 [2024-10-09 00:26:57.567958] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.185 [2024-10-09 00:26:57.572235] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.185 [2024-10-09 00:26:57.572252] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.185 [2024-10-09 00:26:57.572271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:27.185 [2024-10-09 00:26:57.572931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd6c70 (107): Transport endpoint is not connected 00:19:27.185 [2024-10-09 00:26:57.573925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd6c70 (9): Bad file descriptor 00:19:27.185 [2024-10-09 00:26:57.574927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:27.185 [2024-10-09 00:26:57.574933] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.185 [2024-10-09 00:26:57.574940] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:27.185 [2024-10-09 00:26:57.574948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
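The rpc.py invocations against /var/tmp/bdevperf.sock in these test cases are plain JSON-RPC calls over a Unix domain socket: a keyring_file_add_key followed by a bdev_nvme_attach_controller with --psk key0. A bare-bones, hypothetical Python client making the same two calls is sketched below; the method and parameter names are taken from the request dumps in this log, while the socket handling is simplified and is not a substitute for scripts/rpc.py.

    import json, socket

    def rpc(sock_path, method, params, req_id=1):
        # One connection per call; the SPDK app replies with a single JSON object.
        req = json.dumps({"jsonrpc": "2.0", "id": req_id, "method": method, "params": params})
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(req.encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise RuntimeError("connection closed before a full response")
                buf += chunk
                try:
                    return json.loads(buf)  # stop once the response parses
                except ValueError:
                    continue

    sock = "/var/tmp/bdevperf.sock"
    print(rpc(sock, "keyring_file_add_key",
              {"name": "key0", "path": "/tmp/tmp.hUEznNur8V"}))
    print(rpc(sock, "bdev_nvme_attach_controller", {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode2",
        "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0"}, req_id=2))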
00:19:27.185 request: 00:19:27.185 { 00:19:27.185 "name": "TLSTEST", 00:19:27.185 "trtype": "tcp", 00:19:27.185 "traddr": "10.0.0.2", 00:19:27.185 "adrfam": "ipv4", 00:19:27.185 "trsvcid": "4420", 00:19:27.185 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:27.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.185 "prchk_reftag": false, 00:19:27.185 "prchk_guard": false, 00:19:27.185 "hdgst": false, 00:19:27.185 "ddgst": false, 00:19:27.185 "psk": "key0", 00:19:27.185 "allow_unrecognized_csi": false, 00:19:27.185 "method": "bdev_nvme_attach_controller", 00:19:27.185 "req_id": 1 00:19:27.185 } 00:19:27.185 Got JSON-RPC error response 00:19:27.185 response: 00:19:27.185 { 00:19:27.185 "code": -5, 00:19:27.185 "message": "Input/output error" 00:19:27.185 } 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3268763 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3268763 ']' 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3268763 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3268763 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3268763' 00:19:27.185 killing process with pid 3268763 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3268763 00:19:27.185 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.185 00:19:27.185 Latency(us) 00:19:27.185 [2024-10-08T22:26:57.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.185 [2024-10-08T22:26:57.820Z] =================================================================================================================== 00:19:27.185 [2024-10-08T22:26:57.820Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3268763 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.185 
00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3269098 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3269098 /var/tmp/bdevperf.sock 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3269098 ']' 00:19:27.185 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.186 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.186 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.186 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.186 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.472 [2024-10-09 00:26:57.834816] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:19:27.472 [2024-10-09 00:26:57.834874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269098 ] 00:19:27.472 [2024-10-09 00:26:57.908708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.472 [2024-10-09 00:26:57.960076] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.076 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.076 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:28.076 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:28.336 [2024-10-09 00:26:58.789887] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:28.336 [2024-10-09 00:26:58.789909] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:28.336 request: 00:19:28.336 { 00:19:28.336 "name": "key0", 00:19:28.336 "path": "", 00:19:28.336 "method": "keyring_file_add_key", 00:19:28.336 "req_id": 1 00:19:28.336 } 00:19:28.336 Got JSON-RPC error response 00:19:28.336 response: 00:19:28.336 { 00:19:28.336 "code": -1, 00:19:28.336 "message": "Operation not permitted" 00:19:28.336 } 00:19:28.336 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.598 [2024-10-09 00:26:58.974427] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.598 [2024-10-09 00:26:58.974452] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:28.598 request: 00:19:28.598 { 00:19:28.598 "name": "TLSTEST", 00:19:28.598 "trtype": "tcp", 00:19:28.598 "traddr": "10.0.0.2", 00:19:28.598 "adrfam": "ipv4", 00:19:28.598 "trsvcid": "4420", 00:19:28.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.598 "prchk_reftag": false, 00:19:28.598 "prchk_guard": false, 00:19:28.598 "hdgst": false, 00:19:28.598 "ddgst": false, 00:19:28.598 "psk": "key0", 00:19:28.598 "allow_unrecognized_csi": false, 00:19:28.598 "method": "bdev_nvme_attach_controller", 00:19:28.598 "req_id": 1 00:19:28.598 } 00:19:28.598 Got JSON-RPC error response 00:19:28.598 response: 00:19:28.598 { 00:19:28.598 "code": -126, 00:19:28.598 "message": "Required key not available" 00:19:28.598 } 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3269098 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3269098 ']' 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3269098 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3269098 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3269098' 00:19:28.598 killing process with pid 3269098 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3269098 00:19:28.598 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.598 00:19:28.598 Latency(us) 00:19:28.598 [2024-10-08T22:26:59.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.598 [2024-10-08T22:26:59.233Z] =================================================================================================================== 00:19:28.598 [2024-10-08T22:26:59.233Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3269098 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3263048 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3263048 ']' 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3263048 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.598 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3263048 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3263048' 00:19:28.860 killing process with pid 3263048 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3263048 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3263048 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:28.860 00:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.z1OMhTULXn 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.z1OMhTULXn 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3269465 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3269465 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3269465 ']' 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.860 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.860 [2024-10-09 00:26:59.481042] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:19:28.860 [2024-10-09 00:26:59.481098] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.130 [2024-10-09 00:26:59.564200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.130 [2024-10-09 00:26:59.617390] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.130 [2024-10-09 00:26:59.617423] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
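The format_interchange_psk step above turns the 48-character key 00112233445566778899aabbccddeeff0011223344556677 and digest 2 into the interchange form NVMeTLSkey-1:02:<base64>:. As a sketch of what the inline 'python -' helper computes (an assumption, based on the shape of the key_long value above), the base64 payload is the key bytes followed by their CRC32:

    import base64, struct, zlib

    def format_interchange_psk(key: bytes, digest: int) -> str:
        # digest 1 = SHA-256 (32-byte key), digest 2 = SHA-384 (48-byte key)
        payload = key + struct.pack("<I", zlib.crc32(key))
        return f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(payload).decode()}:"

    # Should resemble the key_long value captured in the log above.
    print(format_interchange_psk(b"00112233445566778899aabbccddeeff0011223344556677", 2))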
00:19:29.130 [2024-10-09 00:26:59.617430] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.130 [2024-10-09 00:26:59.617434] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.130 [2024-10-09 00:26:59.617439] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.130 [2024-10-09 00:26:59.617906] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.704 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.704 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:29.704 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:29.704 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.704 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.704 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.704 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.z1OMhTULXn 00:19:29.704 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z1OMhTULXn 00:19:29.704 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.965 [2024-10-09 00:27:00.484977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.965 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:30.226 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:30.226 [2024-10-09 00:27:00.805770] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:30.226 [2024-10-09 00:27:00.805960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.226 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:30.487 malloc0 00:19:30.487 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.748 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn 00:19:30.748 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z1OMhTULXn 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z1OMhTULXn 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3269921 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3269921 /var/tmp/bdevperf.sock 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3269921 ']' 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.010 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.010 [2024-10-09 00:27:01.531170] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:19:31.010 [2024-10-09 00:27:01.531223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269921 ] 00:19:31.010 [2024-10-09 00:27:01.606771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.271 [2024-10-09 00:27:01.659083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.843 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.843 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.843 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn 00:19:32.103 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.103 [2024-10-09 00:27:02.621624] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.103 TLSTESTn1 00:19:32.103 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:32.364 Running I/O for 10 seconds... 00:19:34.248 6165.00 IOPS, 24.08 MiB/s [2024-10-08T22:27:05.824Z] 6255.00 IOPS, 24.43 MiB/s [2024-10-08T22:27:07.204Z] 6366.00 IOPS, 24.87 MiB/s [2024-10-08T22:27:08.170Z] 6256.25 IOPS, 24.44 MiB/s [2024-10-08T22:27:09.112Z] 6259.00 IOPS, 24.45 MiB/s [2024-10-08T22:27:10.053Z] 6295.00 IOPS, 24.59 MiB/s [2024-10-08T22:27:10.993Z] 6282.29 IOPS, 24.54 MiB/s [2024-10-08T22:27:11.936Z] 6296.62 IOPS, 24.60 MiB/s [2024-10-08T22:27:12.879Z] 6263.11 IOPS, 24.47 MiB/s [2024-10-08T22:27:12.879Z] 6239.20 IOPS, 24.37 MiB/s 00:19:42.244 Latency(us) 00:19:42.244 [2024-10-08T22:27:12.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.244 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:42.244 Verification LBA range: start 0x0 length 0x2000 00:19:42.244 TLSTESTn1 : 10.05 6223.84 24.31 0.00 0.00 20509.33 5051.73 44346.03 00:19:42.244 [2024-10-08T22:27:12.879Z] =================================================================================================================== 00:19:42.244 [2024-10-08T22:27:12.879Z] Total : 6223.84 24.31 0.00 0.00 20509.33 5051.73 44346.03 00:19:42.244 { 00:19:42.244 "results": [ 00:19:42.244 { 00:19:42.244 "job": "TLSTESTn1", 00:19:42.244 "core_mask": "0x4", 00:19:42.244 "workload": "verify", 00:19:42.244 "status": "finished", 00:19:42.244 "verify_range": { 00:19:42.244 "start": 0, 00:19:42.244 "length": 8192 00:19:42.244 }, 00:19:42.244 "queue_depth": 128, 00:19:42.244 "io_size": 4096, 00:19:42.244 "runtime": 10.04509, 00:19:42.244 "iops": 6223.83672022849, 00:19:42.244 "mibps": 24.31186218839254, 00:19:42.244 "io_failed": 0, 00:19:42.244 "io_timeout": 0, 00:19:42.244 "avg_latency_us": 20509.32630912203, 00:19:42.244 "min_latency_us": 5051.733333333334, 00:19:42.244 "max_latency_us": 44346.026666666665 00:19:42.244 } 00:19:42.244 ], 00:19:42.244 
"core_count": 1 00:19:42.244 } 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3269921 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3269921 ']' 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3269921 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3269921 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:42.504 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3269921' 00:19:42.504 killing process with pid 3269921 00:19:42.505 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3269921 00:19:42.505 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.505 00:19:42.505 Latency(us) 00:19:42.505 [2024-10-08T22:27:13.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.505 [2024-10-08T22:27:13.140Z] =================================================================================================================== 00:19:42.505 [2024-10-08T22:27:13.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.505 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3269921 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.z1OMhTULXn 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z1OMhTULXn 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z1OMhTULXn 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z1OMhTULXn 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z1OMhTULXn 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3272737 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3272737 /var/tmp/bdevperf.sock 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3272737 ']' 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.505 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.505 [2024-10-09 00:27:13.129640] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:19:42.505 [2024-10-09 00:27:13.129697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272737 ] 00:19:42.765 [2024-10-09 00:27:13.206847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.765 [2024-10-09 00:27:13.257898] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.335 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.335 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.335 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn 00:19:43.595 [2024-10-09 00:27:14.079892] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z1OMhTULXn': 0100666 00:19:43.595 [2024-10-09 00:27:14.079919] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:43.595 request: 00:19:43.595 { 00:19:43.595 "name": "key0", 00:19:43.595 "path": "/tmp/tmp.z1OMhTULXn", 00:19:43.595 "method": "keyring_file_add_key", 00:19:43.595 "req_id": 1 00:19:43.595 } 00:19:43.595 Got JSON-RPC error response 00:19:43.595 response: 00:19:43.595 { 00:19:43.595 "code": -1, 00:19:43.595 "message": "Operation not permitted" 00:19:43.595 } 00:19:43.595 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.855 [2024-10-09 00:27:14.256407] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.855 [2024-10-09 00:27:14.256433] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:43.855 request: 00:19:43.855 { 00:19:43.855 "name": "TLSTEST", 00:19:43.855 "trtype": "tcp", 00:19:43.855 "traddr": "10.0.0.2", 00:19:43.855 "adrfam": "ipv4", 00:19:43.855 "trsvcid": "4420", 00:19:43.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.855 "prchk_reftag": false, 00:19:43.855 "prchk_guard": false, 00:19:43.855 "hdgst": false, 00:19:43.855 "ddgst": false, 00:19:43.855 "psk": "key0", 00:19:43.855 "allow_unrecognized_csi": false, 00:19:43.855 "method": "bdev_nvme_attach_controller", 00:19:43.855 "req_id": 1 00:19:43.855 } 00:19:43.855 Got JSON-RPC error response 00:19:43.855 response: 00:19:43.855 { 00:19:43.855 "code": -126, 00:19:43.855 "message": "Required key not available" 00:19:43.855 } 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3272737 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3272737 ']' 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3272737 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3272737 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3272737' 00:19:43.855 killing process with pid 3272737 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3272737 00:19:43.855 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.855 00:19:43.855 Latency(us) 00:19:43.855 [2024-10-08T22:27:14.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.855 [2024-10-08T22:27:14.490Z] =================================================================================================================== 00:19:43.855 [2024-10-08T22:27:14.490Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3272737 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3269465 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3269465 ']' 00:19:43.855 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3269465 00:19:43.856 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.856 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.856 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3269465 00:19:44.116 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:44.116 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:44.116 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3269465' 00:19:44.116 killing process with pid 3269465 00:19:44.116 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3269465 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3269465 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=3273085 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3273085 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3273085 ']' 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.117 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.117 [2024-10-09 00:27:14.706254] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:19:44.117 [2024-10-09 00:27:14.706307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.377 [2024-10-09 00:27:14.789291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.377 [2024-10-09 00:27:14.844986] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.377 [2024-10-09 00:27:14.845023] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.377 [2024-10-09 00:27:14.845029] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.377 [2024-10-09 00:27:14.845034] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.377 [2024-10-09 00:27:14.845038] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:44.377 [2024-10-09 00:27:14.845497] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.z1OMhTULXn 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.z1OMhTULXn 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.z1OMhTULXn 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z1OMhTULXn 00:19:44.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.210 [2024-10-09 00:27:15.712676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.210 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:45.470 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:45.470 [2024-10-09 00:27:16.081582] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.470 [2024-10-09 00:27:16.081798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.731 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.731 malloc0 00:19:45.731 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.991 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn 00:19:46.252 [2024-10-09 
00:27:16.643283] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z1OMhTULXn': 0100666 00:19:46.252 [2024-10-09 00:27:16.643310] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:46.252 request: 00:19:46.252 { 00:19:46.252 "name": "key0", 00:19:46.252 "path": "/tmp/tmp.z1OMhTULXn", 00:19:46.252 "method": "keyring_file_add_key", 00:19:46.252 "req_id": 1 00:19:46.252 } 00:19:46.252 Got JSON-RPC error response 00:19:46.252 response: 00:19:46.252 { 00:19:46.252 "code": -1, 00:19:46.252 "message": "Operation not permitted" 00:19:46.252 } 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.252 [2024-10-09 00:27:16.819743] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:46.252 [2024-10-09 00:27:16.819773] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:46.252 request: 00:19:46.252 { 00:19:46.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.252 "host": "nqn.2016-06.io.spdk:host1", 00:19:46.252 "psk": "key0", 00:19:46.252 "method": "nvmf_subsystem_add_host", 00:19:46.252 "req_id": 1 00:19:46.252 } 00:19:46.252 Got JSON-RPC error response 00:19:46.252 response: 00:19:46.252 { 00:19:46.252 "code": -32603, 00:19:46.252 "message": "Internal error" 00:19:46.252 } 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3273085 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3273085 ']' 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3273085 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.252 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3273085 00:19:46.513 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:46.513 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:46.513 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3273085' 00:19:46.513 killing process with pid 3273085 00:19:46.513 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3273085 00:19:46.513 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3273085 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.z1OMhTULXn 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:46.513 00:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3273479 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3273479 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3273479 ']' 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.513 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.513 [2024-10-09 00:27:17.109250] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:19:46.513 [2024-10-09 00:27:17.109302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.774 [2024-10-09 00:27:17.192571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.774 [2024-10-09 00:27:17.245215] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.774 [2024-10-09 00:27:17.245250] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.774 [2024-10-09 00:27:17.245256] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.774 [2024-10-09 00:27:17.245261] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.774 [2024-10-09 00:27:17.245265] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
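The failure above is the negative case the test expects at target/tls.sh@178: the PSK file /tmp/tmp.z1OMhTULXn still carries mode 0666, keyring_file_check_path rejects it, keyring_file_add_key returns "Operation not permitted" (code -1), and the follow-up nvmf_subsystem_add_host fails with -32603 because key0 was never registered. The first target (pid 3273085) is then killed, the key file is tightened to 0600 at tls.sh@182, and a fresh target (pid 3273479) is started for the positive case at tls.sh@186. Condensed, the corrected flow (the chmod from tls.sh@182 plus the setup_nvmf_tgt helper at tls.sh@50-59) amounts to the following, with the long /var/jenkins workspace prefixes dropped for readability:

    chmod 0600 /tmp/tmp.z1OMhTULXn
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on nvmf_subsystem_add_listener is what enables the (still experimental) TLS listener reported in the nvmf_tcp_listen notices.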
00:19:46.774 [2024-10-09 00:27:17.245746] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.345 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.345 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:47.345 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:47.345 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:47.345 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.345 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.z1OMhTULXn 00:19:47.345 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z1OMhTULXn 00:19:47.345 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:47.606 [2024-10-09 00:27:18.100826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.606 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.866 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:47.866 [2024-10-09 00:27:18.461713] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.866 [2024-10-09 00:27:18.461897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.866 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:48.128 malloc0 00:19:48.128 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:48.393 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3273925 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3273925 /var/tmp/bdevperf.sock 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3273925 ']' 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.654 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.654 [2024-10-09 00:27:19.279095] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:19:48.655 [2024-10-09 00:27:19.279152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273925 ] 00:19:48.923 [2024-10-09 00:27:19.357049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.924 [2024-10-09 00:27:19.419976] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.495 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.495 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:49.495 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn 00:19:49.755 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.016 [2024-10-09 00:27:20.411405] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.016 TLSTESTn1 00:19:50.016 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:50.278 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:50.278 "subsystems": [ 00:19:50.278 { 00:19:50.278 "subsystem": "keyring", 00:19:50.278 "config": [ 00:19:50.278 { 00:19:50.278 "method": "keyring_file_add_key", 00:19:50.278 "params": { 00:19:50.278 "name": "key0", 00:19:50.278 "path": "/tmp/tmp.z1OMhTULXn" 00:19:50.278 } 00:19:50.278 } 00:19:50.278 ] 00:19:50.278 }, 00:19:50.278 { 00:19:50.278 "subsystem": "iobuf", 00:19:50.278 "config": [ 00:19:50.278 { 00:19:50.278 "method": "iobuf_set_options", 00:19:50.278 "params": { 00:19:50.278 "small_pool_count": 8192, 00:19:50.278 "large_pool_count": 1024, 00:19:50.278 "small_bufsize": 8192, 00:19:50.278 "large_bufsize": 135168 00:19:50.278 } 00:19:50.278 } 00:19:50.278 ] 00:19:50.278 }, 00:19:50.278 { 00:19:50.278 "subsystem": "sock", 00:19:50.278 "config": [ 00:19:50.278 { 00:19:50.278 "method": "sock_set_default_impl", 00:19:50.278 "params": { 00:19:50.278 "impl_name": "posix" 00:19:50.278 } 00:19:50.278 }, 
00:19:50.278 { 00:19:50.278 "method": "sock_impl_set_options", 00:19:50.278 "params": { 00:19:50.278 "impl_name": "ssl", 00:19:50.279 "recv_buf_size": 4096, 00:19:50.279 "send_buf_size": 4096, 00:19:50.279 "enable_recv_pipe": true, 00:19:50.279 "enable_quickack": false, 00:19:50.279 "enable_placement_id": 0, 00:19:50.279 "enable_zerocopy_send_server": true, 00:19:50.279 "enable_zerocopy_send_client": false, 00:19:50.279 "zerocopy_threshold": 0, 00:19:50.279 "tls_version": 0, 00:19:50.279 "enable_ktls": false 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "sock_impl_set_options", 00:19:50.279 "params": { 00:19:50.279 "impl_name": "posix", 00:19:50.279 "recv_buf_size": 2097152, 00:19:50.279 "send_buf_size": 2097152, 00:19:50.279 "enable_recv_pipe": true, 00:19:50.279 "enable_quickack": false, 00:19:50.279 "enable_placement_id": 0, 00:19:50.279 "enable_zerocopy_send_server": true, 00:19:50.279 "enable_zerocopy_send_client": false, 00:19:50.279 "zerocopy_threshold": 0, 00:19:50.279 "tls_version": 0, 00:19:50.279 "enable_ktls": false 00:19:50.279 } 00:19:50.279 } 00:19:50.279 ] 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "subsystem": "vmd", 00:19:50.279 "config": [] 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "subsystem": "accel", 00:19:50.279 "config": [ 00:19:50.279 { 00:19:50.279 "method": "accel_set_options", 00:19:50.279 "params": { 00:19:50.279 "small_cache_size": 128, 00:19:50.279 "large_cache_size": 16, 00:19:50.279 "task_count": 2048, 00:19:50.279 "sequence_count": 2048, 00:19:50.279 "buf_count": 2048 00:19:50.279 } 00:19:50.279 } 00:19:50.279 ] 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "subsystem": "bdev", 00:19:50.279 "config": [ 00:19:50.279 { 00:19:50.279 "method": "bdev_set_options", 00:19:50.279 "params": { 00:19:50.279 "bdev_io_pool_size": 65535, 00:19:50.279 "bdev_io_cache_size": 256, 00:19:50.279 "bdev_auto_examine": true, 00:19:50.279 "iobuf_small_cache_size": 128, 00:19:50.279 "iobuf_large_cache_size": 16 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "bdev_raid_set_options", 00:19:50.279 "params": { 00:19:50.279 "process_window_size_kb": 1024, 00:19:50.279 "process_max_bandwidth_mb_sec": 0 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "bdev_iscsi_set_options", 00:19:50.279 "params": { 00:19:50.279 "timeout_sec": 30 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "bdev_nvme_set_options", 00:19:50.279 "params": { 00:19:50.279 "action_on_timeout": "none", 00:19:50.279 "timeout_us": 0, 00:19:50.279 "timeout_admin_us": 0, 00:19:50.279 "keep_alive_timeout_ms": 10000, 00:19:50.279 "arbitration_burst": 0, 00:19:50.279 "low_priority_weight": 0, 00:19:50.279 "medium_priority_weight": 0, 00:19:50.279 "high_priority_weight": 0, 00:19:50.279 "nvme_adminq_poll_period_us": 10000, 00:19:50.279 "nvme_ioq_poll_period_us": 0, 00:19:50.279 "io_queue_requests": 0, 00:19:50.279 "delay_cmd_submit": true, 00:19:50.279 "transport_retry_count": 4, 00:19:50.279 "bdev_retry_count": 3, 00:19:50.279 "transport_ack_timeout": 0, 00:19:50.279 "ctrlr_loss_timeout_sec": 0, 00:19:50.279 "reconnect_delay_sec": 0, 00:19:50.279 "fast_io_fail_timeout_sec": 0, 00:19:50.279 "disable_auto_failback": false, 00:19:50.279 "generate_uuids": false, 00:19:50.279 "transport_tos": 0, 00:19:50.279 "nvme_error_stat": false, 00:19:50.279 "rdma_srq_size": 0, 00:19:50.279 "io_path_stat": false, 00:19:50.279 "allow_accel_sequence": false, 00:19:50.279 "rdma_max_cq_size": 0, 00:19:50.279 "rdma_cm_event_timeout_ms": 0, 00:19:50.279 
"dhchap_digests": [ 00:19:50.279 "sha256", 00:19:50.279 "sha384", 00:19:50.279 "sha512" 00:19:50.279 ], 00:19:50.279 "dhchap_dhgroups": [ 00:19:50.279 "null", 00:19:50.279 "ffdhe2048", 00:19:50.279 "ffdhe3072", 00:19:50.279 "ffdhe4096", 00:19:50.279 "ffdhe6144", 00:19:50.279 "ffdhe8192" 00:19:50.279 ] 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "bdev_nvme_set_hotplug", 00:19:50.279 "params": { 00:19:50.279 "period_us": 100000, 00:19:50.279 "enable": false 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "bdev_malloc_create", 00:19:50.279 "params": { 00:19:50.279 "name": "malloc0", 00:19:50.279 "num_blocks": 8192, 00:19:50.279 "block_size": 4096, 00:19:50.279 "physical_block_size": 4096, 00:19:50.279 "uuid": "3fe5900d-0677-4a50-ab80-6150743ff857", 00:19:50.279 "optimal_io_boundary": 0, 00:19:50.279 "md_size": 0, 00:19:50.279 "dif_type": 0, 00:19:50.279 "dif_is_head_of_md": false, 00:19:50.279 "dif_pi_format": 0 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "bdev_wait_for_examine" 00:19:50.279 } 00:19:50.279 ] 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "subsystem": "nbd", 00:19:50.279 "config": [] 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "subsystem": "scheduler", 00:19:50.279 "config": [ 00:19:50.279 { 00:19:50.279 "method": "framework_set_scheduler", 00:19:50.279 "params": { 00:19:50.279 "name": "static" 00:19:50.279 } 00:19:50.279 } 00:19:50.279 ] 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "subsystem": "nvmf", 00:19:50.279 "config": [ 00:19:50.279 { 00:19:50.279 "method": "nvmf_set_config", 00:19:50.279 "params": { 00:19:50.279 "discovery_filter": "match_any", 00:19:50.279 "admin_cmd_passthru": { 00:19:50.279 "identify_ctrlr": false 00:19:50.279 }, 00:19:50.279 "dhchap_digests": [ 00:19:50.279 "sha256", 00:19:50.279 "sha384", 00:19:50.279 "sha512" 00:19:50.279 ], 00:19:50.279 "dhchap_dhgroups": [ 00:19:50.279 "null", 00:19:50.279 "ffdhe2048", 00:19:50.279 "ffdhe3072", 00:19:50.279 "ffdhe4096", 00:19:50.279 "ffdhe6144", 00:19:50.279 "ffdhe8192" 00:19:50.279 ] 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "nvmf_set_max_subsystems", 00:19:50.279 "params": { 00:19:50.279 "max_subsystems": 1024 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "nvmf_set_crdt", 00:19:50.279 "params": { 00:19:50.279 "crdt1": 0, 00:19:50.279 "crdt2": 0, 00:19:50.279 "crdt3": 0 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "nvmf_create_transport", 00:19:50.279 "params": { 00:19:50.279 "trtype": "TCP", 00:19:50.279 "max_queue_depth": 128, 00:19:50.279 "max_io_qpairs_per_ctrlr": 127, 00:19:50.279 "in_capsule_data_size": 4096, 00:19:50.279 "max_io_size": 131072, 00:19:50.279 "io_unit_size": 131072, 00:19:50.279 "max_aq_depth": 128, 00:19:50.279 "num_shared_buffers": 511, 00:19:50.279 "buf_cache_size": 4294967295, 00:19:50.279 "dif_insert_or_strip": false, 00:19:50.279 "zcopy": false, 00:19:50.279 "c2h_success": false, 00:19:50.279 "sock_priority": 0, 00:19:50.279 "abort_timeout_sec": 1, 00:19:50.279 "ack_timeout": 0, 00:19:50.279 "data_wr_pool_size": 0 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "nvmf_create_subsystem", 00:19:50.279 "params": { 00:19:50.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.279 "allow_any_host": false, 00:19:50.279 "serial_number": "SPDK00000000000001", 00:19:50.279 "model_number": "SPDK bdev Controller", 00:19:50.279 "max_namespaces": 10, 00:19:50.279 "min_cntlid": 1, 00:19:50.279 "max_cntlid": 65519, 00:19:50.279 
"ana_reporting": false 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "nvmf_subsystem_add_host", 00:19:50.279 "params": { 00:19:50.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.279 "host": "nqn.2016-06.io.spdk:host1", 00:19:50.279 "psk": "key0" 00:19:50.279 } 00:19:50.279 }, 00:19:50.279 { 00:19:50.279 "method": "nvmf_subsystem_add_ns", 00:19:50.279 "params": { 00:19:50.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.279 "namespace": { 00:19:50.279 "nsid": 1, 00:19:50.279 "bdev_name": "malloc0", 00:19:50.279 "nguid": "3FE5900D06774A50AB806150743FF857", 00:19:50.279 "uuid": "3fe5900d-0677-4a50-ab80-6150743ff857", 00:19:50.279 "no_auto_visible": false 00:19:50.279 } 00:19:50.279 } 00:19:50.280 }, 00:19:50.280 { 00:19:50.280 "method": "nvmf_subsystem_add_listener", 00:19:50.280 "params": { 00:19:50.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.280 "listen_address": { 00:19:50.280 "trtype": "TCP", 00:19:50.280 "adrfam": "IPv4", 00:19:50.280 "traddr": "10.0.0.2", 00:19:50.280 "trsvcid": "4420" 00:19:50.280 }, 00:19:50.280 "secure_channel": true 00:19:50.280 } 00:19:50.280 } 00:19:50.280 ] 00:19:50.280 } 00:19:50.280 ] 00:19:50.280 }' 00:19:50.280 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:50.541 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:50.541 "subsystems": [ 00:19:50.541 { 00:19:50.541 "subsystem": "keyring", 00:19:50.541 "config": [ 00:19:50.541 { 00:19:50.541 "method": "keyring_file_add_key", 00:19:50.541 "params": { 00:19:50.541 "name": "key0", 00:19:50.541 "path": "/tmp/tmp.z1OMhTULXn" 00:19:50.541 } 00:19:50.541 } 00:19:50.541 ] 00:19:50.541 }, 00:19:50.541 { 00:19:50.541 "subsystem": "iobuf", 00:19:50.541 "config": [ 00:19:50.541 { 00:19:50.541 "method": "iobuf_set_options", 00:19:50.541 "params": { 00:19:50.541 "small_pool_count": 8192, 00:19:50.541 "large_pool_count": 1024, 00:19:50.541 "small_bufsize": 8192, 00:19:50.541 "large_bufsize": 135168 00:19:50.541 } 00:19:50.541 } 00:19:50.541 ] 00:19:50.541 }, 00:19:50.541 { 00:19:50.541 "subsystem": "sock", 00:19:50.541 "config": [ 00:19:50.541 { 00:19:50.541 "method": "sock_set_default_impl", 00:19:50.541 "params": { 00:19:50.541 "impl_name": "posix" 00:19:50.541 } 00:19:50.541 }, 00:19:50.541 { 00:19:50.541 "method": "sock_impl_set_options", 00:19:50.541 "params": { 00:19:50.541 "impl_name": "ssl", 00:19:50.541 "recv_buf_size": 4096, 00:19:50.541 "send_buf_size": 4096, 00:19:50.541 "enable_recv_pipe": true, 00:19:50.541 "enable_quickack": false, 00:19:50.541 "enable_placement_id": 0, 00:19:50.541 "enable_zerocopy_send_server": true, 00:19:50.541 "enable_zerocopy_send_client": false, 00:19:50.541 "zerocopy_threshold": 0, 00:19:50.541 "tls_version": 0, 00:19:50.541 "enable_ktls": false 00:19:50.541 } 00:19:50.541 }, 00:19:50.541 { 00:19:50.541 "method": "sock_impl_set_options", 00:19:50.541 "params": { 00:19:50.541 "impl_name": "posix", 00:19:50.541 "recv_buf_size": 2097152, 00:19:50.541 "send_buf_size": 2097152, 00:19:50.541 "enable_recv_pipe": true, 00:19:50.541 "enable_quickack": false, 00:19:50.541 "enable_placement_id": 0, 00:19:50.541 "enable_zerocopy_send_server": true, 00:19:50.541 "enable_zerocopy_send_client": false, 00:19:50.541 "zerocopy_threshold": 0, 00:19:50.541 "tls_version": 0, 00:19:50.541 "enable_ktls": false 00:19:50.541 } 00:19:50.542 } 00:19:50.542 ] 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 
"subsystem": "vmd", 00:19:50.542 "config": [] 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 "subsystem": "accel", 00:19:50.542 "config": [ 00:19:50.542 { 00:19:50.542 "method": "accel_set_options", 00:19:50.542 "params": { 00:19:50.542 "small_cache_size": 128, 00:19:50.542 "large_cache_size": 16, 00:19:50.542 "task_count": 2048, 00:19:50.542 "sequence_count": 2048, 00:19:50.542 "buf_count": 2048 00:19:50.542 } 00:19:50.542 } 00:19:50.542 ] 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 "subsystem": "bdev", 00:19:50.542 "config": [ 00:19:50.542 { 00:19:50.542 "method": "bdev_set_options", 00:19:50.542 "params": { 00:19:50.542 "bdev_io_pool_size": 65535, 00:19:50.542 "bdev_io_cache_size": 256, 00:19:50.542 "bdev_auto_examine": true, 00:19:50.542 "iobuf_small_cache_size": 128, 00:19:50.542 "iobuf_large_cache_size": 16 00:19:50.542 } 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 "method": "bdev_raid_set_options", 00:19:50.542 "params": { 00:19:50.542 "process_window_size_kb": 1024, 00:19:50.542 "process_max_bandwidth_mb_sec": 0 00:19:50.542 } 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 "method": "bdev_iscsi_set_options", 00:19:50.542 "params": { 00:19:50.542 "timeout_sec": 30 00:19:50.542 } 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 "method": "bdev_nvme_set_options", 00:19:50.542 "params": { 00:19:50.542 "action_on_timeout": "none", 00:19:50.542 "timeout_us": 0, 00:19:50.542 "timeout_admin_us": 0, 00:19:50.542 "keep_alive_timeout_ms": 10000, 00:19:50.542 "arbitration_burst": 0, 00:19:50.542 "low_priority_weight": 0, 00:19:50.542 "medium_priority_weight": 0, 00:19:50.542 "high_priority_weight": 0, 00:19:50.542 "nvme_adminq_poll_period_us": 10000, 00:19:50.542 "nvme_ioq_poll_period_us": 0, 00:19:50.542 "io_queue_requests": 512, 00:19:50.542 "delay_cmd_submit": true, 00:19:50.542 "transport_retry_count": 4, 00:19:50.542 "bdev_retry_count": 3, 00:19:50.542 "transport_ack_timeout": 0, 00:19:50.542 "ctrlr_loss_timeout_sec": 0, 00:19:50.542 "reconnect_delay_sec": 0, 00:19:50.542 "fast_io_fail_timeout_sec": 0, 00:19:50.542 "disable_auto_failback": false, 00:19:50.542 "generate_uuids": false, 00:19:50.542 "transport_tos": 0, 00:19:50.542 "nvme_error_stat": false, 00:19:50.542 "rdma_srq_size": 0, 00:19:50.542 "io_path_stat": false, 00:19:50.542 "allow_accel_sequence": false, 00:19:50.542 "rdma_max_cq_size": 0, 00:19:50.542 "rdma_cm_event_timeout_ms": 0, 00:19:50.542 "dhchap_digests": [ 00:19:50.542 "sha256", 00:19:50.542 "sha384", 00:19:50.542 "sha512" 00:19:50.542 ], 00:19:50.542 "dhchap_dhgroups": [ 00:19:50.542 "null", 00:19:50.542 "ffdhe2048", 00:19:50.542 "ffdhe3072", 00:19:50.542 "ffdhe4096", 00:19:50.542 "ffdhe6144", 00:19:50.542 "ffdhe8192" 00:19:50.542 ] 00:19:50.542 } 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 "method": "bdev_nvme_attach_controller", 00:19:50.542 "params": { 00:19:50.542 "name": "TLSTEST", 00:19:50.542 "trtype": "TCP", 00:19:50.542 "adrfam": "IPv4", 00:19:50.542 "traddr": "10.0.0.2", 00:19:50.542 "trsvcid": "4420", 00:19:50.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.542 "prchk_reftag": false, 00:19:50.542 "prchk_guard": false, 00:19:50.542 "ctrlr_loss_timeout_sec": 0, 00:19:50.542 "reconnect_delay_sec": 0, 00:19:50.542 "fast_io_fail_timeout_sec": 0, 00:19:50.542 "psk": "key0", 00:19:50.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.542 "hdgst": false, 00:19:50.542 "ddgst": false, 00:19:50.542 "multipath": "multipath" 00:19:50.542 } 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 "method": "bdev_nvme_set_hotplug", 00:19:50.542 "params": { 00:19:50.542 "period_us": 
100000, 00:19:50.542 "enable": false 00:19:50.542 } 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 "method": "bdev_wait_for_examine" 00:19:50.542 } 00:19:50.542 ] 00:19:50.542 }, 00:19:50.542 { 00:19:50.542 "subsystem": "nbd", 00:19:50.542 "config": [] 00:19:50.542 } 00:19:50.542 ] 00:19:50.542 }' 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3273925 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3273925 ']' 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3273925 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3273925 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3273925' 00:19:50.542 killing process with pid 3273925 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3273925 00:19:50.542 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.542 00:19:50.542 Latency(us) 00:19:50.542 [2024-10-08T22:27:21.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.542 [2024-10-08T22:27:21.177Z] =================================================================================================================== 00:19:50.542 [2024-10-08T22:27:21.177Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3273925 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3273479 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3273479 ']' 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3273479 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3273479 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3273479' 00:19:50.804 killing process with pid 3273479 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3273479 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3273479 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:50.804 
00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.804 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:50.804 "subsystems": [ 00:19:50.804 { 00:19:50.804 "subsystem": "keyring", 00:19:50.804 "config": [ 00:19:50.804 { 00:19:50.804 "method": "keyring_file_add_key", 00:19:50.804 "params": { 00:19:50.804 "name": "key0", 00:19:50.804 "path": "/tmp/tmp.z1OMhTULXn" 00:19:50.804 } 00:19:50.804 } 00:19:50.804 ] 00:19:50.804 }, 00:19:50.804 { 00:19:50.804 "subsystem": "iobuf", 00:19:50.804 "config": [ 00:19:50.804 { 00:19:50.804 "method": "iobuf_set_options", 00:19:50.804 "params": { 00:19:50.804 "small_pool_count": 8192, 00:19:50.804 "large_pool_count": 1024, 00:19:50.804 "small_bufsize": 8192, 00:19:50.804 "large_bufsize": 135168 00:19:50.804 } 00:19:50.804 } 00:19:50.804 ] 00:19:50.804 }, 00:19:50.804 { 00:19:50.804 "subsystem": "sock", 00:19:50.804 "config": [ 00:19:50.804 { 00:19:50.804 "method": "sock_set_default_impl", 00:19:50.804 "params": { 00:19:50.804 "impl_name": "posix" 00:19:50.804 } 00:19:50.804 }, 00:19:50.804 { 00:19:50.804 "method": "sock_impl_set_options", 00:19:50.804 "params": { 00:19:50.804 "impl_name": "ssl", 00:19:50.804 "recv_buf_size": 4096, 00:19:50.804 "send_buf_size": 4096, 00:19:50.804 "enable_recv_pipe": true, 00:19:50.804 "enable_quickack": false, 00:19:50.804 "enable_placement_id": 0, 00:19:50.804 "enable_zerocopy_send_server": true, 00:19:50.804 "enable_zerocopy_send_client": false, 00:19:50.804 "zerocopy_threshold": 0, 00:19:50.804 "tls_version": 0, 00:19:50.804 "enable_ktls": false 00:19:50.804 } 00:19:50.804 }, 00:19:50.804 { 00:19:50.804 "method": "sock_impl_set_options", 00:19:50.804 "params": { 00:19:50.804 "impl_name": "posix", 00:19:50.804 "recv_buf_size": 2097152, 00:19:50.804 "send_buf_size": 2097152, 00:19:50.804 "enable_recv_pipe": true, 00:19:50.804 "enable_quickack": false, 00:19:50.804 "enable_placement_id": 0, 00:19:50.804 "enable_zerocopy_send_server": true, 00:19:50.804 "enable_zerocopy_send_client": false, 00:19:50.804 "zerocopy_threshold": 0, 00:19:50.804 "tls_version": 0, 00:19:50.804 "enable_ktls": false 00:19:50.804 } 00:19:50.804 } 00:19:50.804 ] 00:19:50.804 }, 00:19:50.804 { 00:19:50.804 "subsystem": "vmd", 00:19:50.804 "config": [] 00:19:50.804 }, 00:19:50.804 { 00:19:50.804 "subsystem": "accel", 00:19:50.804 "config": [ 00:19:50.804 { 00:19:50.804 "method": "accel_set_options", 00:19:50.804 "params": { 00:19:50.804 "small_cache_size": 128, 00:19:50.804 "large_cache_size": 16, 00:19:50.804 "task_count": 2048, 00:19:50.804 "sequence_count": 2048, 00:19:50.804 "buf_count": 2048 00:19:50.804 } 00:19:50.804 } 00:19:50.804 ] 00:19:50.804 }, 00:19:50.804 { 00:19:50.804 "subsystem": "bdev", 00:19:50.804 "config": [ 00:19:50.804 { 00:19:50.804 "method": "bdev_set_options", 00:19:50.804 "params": { 00:19:50.804 "bdev_io_pool_size": 65535, 00:19:50.804 "bdev_io_cache_size": 256, 00:19:50.804 "bdev_auto_examine": true, 00:19:50.804 "iobuf_small_cache_size": 128, 00:19:50.804 "iobuf_large_cache_size": 16 00:19:50.804 } 00:19:50.804 }, 00:19:50.804 { 00:19:50.804 "method": "bdev_raid_set_options", 00:19:50.804 "params": { 00:19:50.804 "process_window_size_kb": 1024, 00:19:50.804 "process_max_bandwidth_mb_sec": 0 00:19:50.804 } 00:19:50.804 }, 
00:19:50.804 { 00:19:50.804 "method": "bdev_iscsi_set_options", 00:19:50.804 "params": { 00:19:50.804 "timeout_sec": 30 00:19:50.804 } 00:19:50.804 }, 00:19:50.804 { 00:19:50.804 "method": "bdev_nvme_set_options", 00:19:50.804 "params": { 00:19:50.804 "action_on_timeout": "none", 00:19:50.804 "timeout_us": 0, 00:19:50.804 "timeout_admin_us": 0, 00:19:50.804 "keep_alive_timeout_ms": 10000, 00:19:50.804 "arbitration_burst": 0, 00:19:50.804 "low_priority_weight": 0, 00:19:50.804 "medium_priority_weight": 0, 00:19:50.804 "high_priority_weight": 0, 00:19:50.804 "nvme_adminq_poll_period_us": 10000, 00:19:50.804 "nvme_ioq_poll_period_us": 0, 00:19:50.804 "io_queue_requests": 0, 00:19:50.804 "delay_cmd_submit": true, 00:19:50.805 "transport_retry_count": 4, 00:19:50.805 "bdev_retry_count": 3, 00:19:50.805 "transport_ack_timeout": 0, 00:19:50.805 "ctrlr_loss_timeout_sec": 0, 00:19:50.805 "reconnect_delay_sec": 0, 00:19:50.805 "fast_io_fail_timeout_sec": 0, 00:19:50.805 "disable_auto_failback": false, 00:19:50.805 "generate_uuids": false, 00:19:50.805 "transport_tos": 0, 00:19:50.805 "nvme_error_stat": false, 00:19:50.805 "rdma_srq_size": 0, 00:19:50.805 "io_path_stat": false, 00:19:50.805 "allow_accel_sequence": false, 00:19:50.805 "rdma_max_cq_size": 0, 00:19:50.805 "rdma_cm_event_timeout_ms": 0, 00:19:50.805 "dhchap_digests": [ 00:19:50.805 "sha256", 00:19:50.805 "sha384", 00:19:50.805 "sha512" 00:19:50.805 ], 00:19:50.805 "dhchap_dhgroups": [ 00:19:50.805 "null", 00:19:50.805 "ffdhe2048", 00:19:50.805 "ffdhe3072", 00:19:50.805 "ffdhe4096", 00:19:50.805 "ffdhe6144", 00:19:50.805 "ffdhe8192" 00:19:50.805 ] 00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "bdev_nvme_set_hotplug", 00:19:50.805 "params": { 00:19:50.805 "period_us": 100000, 00:19:50.805 "enable": false 00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "bdev_malloc_create", 00:19:50.805 "params": { 00:19:50.805 "name": "malloc0", 00:19:50.805 "num_blocks": 8192, 00:19:50.805 "block_size": 4096, 00:19:50.805 "physical_block_size": 4096, 00:19:50.805 "uuid": "3fe5900d-0677-4a50-ab80-6150743ff857", 00:19:50.805 "optimal_io_boundary": 0, 00:19:50.805 "md_size": 0, 00:19:50.805 "dif_type": 0, 00:19:50.805 "dif_is_head_of_md": false, 00:19:50.805 "dif_pi_format": 0 00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "bdev_wait_for_examine" 00:19:50.805 } 00:19:50.805 ] 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "subsystem": "nbd", 00:19:50.805 "config": [] 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "subsystem": "scheduler", 00:19:50.805 "config": [ 00:19:50.805 { 00:19:50.805 "method": "framework_set_scheduler", 00:19:50.805 "params": { 00:19:50.805 "name": "static" 00:19:50.805 } 00:19:50.805 } 00:19:50.805 ] 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "subsystem": "nvmf", 00:19:50.805 "config": [ 00:19:50.805 { 00:19:50.805 "method": "nvmf_set_config", 00:19:50.805 "params": { 00:19:50.805 "discovery_filter": "match_any", 00:19:50.805 "admin_cmd_passthru": { 00:19:50.805 "identify_ctrlr": false 00:19:50.805 }, 00:19:50.805 "dhchap_digests": [ 00:19:50.805 "sha256", 00:19:50.805 "sha384", 00:19:50.805 "sha512" 00:19:50.805 ], 00:19:50.805 "dhchap_dhgroups": [ 00:19:50.805 "null", 00:19:50.805 "ffdhe2048", 00:19:50.805 "ffdhe3072", 00:19:50.805 "ffdhe4096", 00:19:50.805 "ffdhe6144", 00:19:50.805 "ffdhe8192" 00:19:50.805 ] 00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "nvmf_set_max_subsystems", 00:19:50.805 "params": { 00:19:50.805 "max_subsystems": 1024 
00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "nvmf_set_crdt", 00:19:50.805 "params": { 00:19:50.805 "crdt1": 0, 00:19:50.805 "crdt2": 0, 00:19:50.805 "crdt3": 0 00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "nvmf_create_transport", 00:19:50.805 "params": { 00:19:50.805 "trtype": "TCP", 00:19:50.805 "max_queue_depth": 128, 00:19:50.805 "max_io_qpairs_per_ctrlr": 127, 00:19:50.805 "in_capsule_data_size": 4096, 00:19:50.805 "max_io_size": 131072, 00:19:50.805 "io_unit_size": 131072, 00:19:50.805 "max_aq_depth": 128, 00:19:50.805 "num_shared_buffers": 511, 00:19:50.805 "buf_cache_size": 4294967295, 00:19:50.805 "dif_insert_or_strip": false, 00:19:50.805 "zcopy": false, 00:19:50.805 "c2h_success": false, 00:19:50.805 "sock_priority": 0, 00:19:50.805 "abort_timeout_sec": 1, 00:19:50.805 "ack_timeout": 0, 00:19:50.805 "data_wr_pool_size": 0 00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "nvmf_create_subsystem", 00:19:50.805 "params": { 00:19:50.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.805 "allow_any_host": false, 00:19:50.805 "serial_number": "SPDK00000000000001", 00:19:50.805 "model_number": "SPDK bdev Controller", 00:19:50.805 "max_namespaces": 10, 00:19:50.805 "min_cntlid": 1, 00:19:50.805 "max_cntlid": 65519, 00:19:50.805 "ana_reporting": false 00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "nvmf_subsystem_add_host", 00:19:50.805 "params": { 00:19:50.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.805 "host": "nqn.2016-06.io.spdk:host1", 00:19:50.805 "psk": "key0" 00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "nvmf_subsystem_add_ns", 00:19:50.805 "params": { 00:19:50.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.805 "namespace": { 00:19:50.805 "nsid": 1, 00:19:50.805 "bdev_name": "malloc0", 00:19:50.805 "nguid": "3FE5900D06774A50AB806150743FF857", 00:19:50.805 "uuid": "3fe5900d-0677-4a50-ab80-6150743ff857", 00:19:50.805 "no_auto_visible": false 00:19:50.805 } 00:19:50.805 } 00:19:50.805 }, 00:19:50.805 { 00:19:50.805 "method": "nvmf_subsystem_add_listener", 00:19:50.805 "params": { 00:19:50.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.805 "listen_address": { 00:19:50.805 "trtype": "TCP", 00:19:50.805 "adrfam": "IPv4", 00:19:50.805 "traddr": "10.0.0.2", 00:19:50.805 "trsvcid": "4420" 00:19:50.805 }, 00:19:50.805 "secure_channel": true 00:19:50.805 } 00:19:50.805 } 00:19:50.805 ] 00:19:50.805 } 00:19:50.805 ] 00:19:50.805 }' 00:19:50.805 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3274490 00:19:50.805 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3274490 00:19:50.805 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:50.805 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3274490 ']' 00:19:50.805 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.805 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.805 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:50.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.805 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.805 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.066 [2024-10-09 00:27:21.467150] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:19:51.066 [2024-10-09 00:27:21.467208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.066 [2024-10-09 00:27:21.548059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.066 [2024-10-09 00:27:21.601277] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.066 [2024-10-09 00:27:21.601309] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.066 [2024-10-09 00:27:21.601315] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.066 [2024-10-09 00:27:21.601320] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.066 [2024-10-09 00:27:21.601324] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.066 [2024-10-09 00:27:21.601828] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.327 [2024-10-09 00:27:21.803272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.327 [2024-10-09 00:27:21.835275] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:51.327 [2024-10-09 00:27:21.835482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3274553 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3274553 /var/tmp/bdevperf.sock 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3274553 ']' 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:51.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.903 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:51.903 "subsystems": [ 00:19:51.903 { 00:19:51.903 "subsystem": "keyring", 00:19:51.903 "config": [ 00:19:51.903 { 00:19:51.903 "method": "keyring_file_add_key", 00:19:51.903 "params": { 00:19:51.903 "name": "key0", 00:19:51.903 "path": "/tmp/tmp.z1OMhTULXn" 00:19:51.903 } 00:19:51.903 } 00:19:51.903 ] 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "subsystem": "iobuf", 00:19:51.903 "config": [ 00:19:51.903 { 00:19:51.903 "method": "iobuf_set_options", 00:19:51.903 "params": { 00:19:51.903 "small_pool_count": 8192, 00:19:51.903 "large_pool_count": 1024, 00:19:51.903 "small_bufsize": 8192, 00:19:51.903 "large_bufsize": 135168 00:19:51.903 } 00:19:51.903 } 00:19:51.903 ] 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "subsystem": "sock", 00:19:51.903 "config": [ 00:19:51.903 { 00:19:51.903 "method": "sock_set_default_impl", 00:19:51.903 "params": { 00:19:51.903 "impl_name": "posix" 00:19:51.903 } 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "method": "sock_impl_set_options", 00:19:51.903 "params": { 00:19:51.903 "impl_name": "ssl", 00:19:51.903 "recv_buf_size": 4096, 00:19:51.903 "send_buf_size": 4096, 00:19:51.903 "enable_recv_pipe": true, 00:19:51.903 "enable_quickack": false, 00:19:51.903 "enable_placement_id": 0, 00:19:51.903 "enable_zerocopy_send_server": true, 00:19:51.903 "enable_zerocopy_send_client": false, 00:19:51.903 "zerocopy_threshold": 0, 00:19:51.903 "tls_version": 0, 00:19:51.903 "enable_ktls": false 00:19:51.903 } 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "method": "sock_impl_set_options", 00:19:51.903 "params": { 00:19:51.903 "impl_name": "posix", 00:19:51.903 "recv_buf_size": 2097152, 00:19:51.903 "send_buf_size": 2097152, 00:19:51.903 "enable_recv_pipe": true, 00:19:51.903 "enable_quickack": false, 00:19:51.903 "enable_placement_id": 0, 00:19:51.903 "enable_zerocopy_send_server": true, 00:19:51.903 "enable_zerocopy_send_client": false, 00:19:51.903 "zerocopy_threshold": 0, 00:19:51.903 "tls_version": 0, 00:19:51.903 "enable_ktls": false 00:19:51.903 } 00:19:51.903 } 00:19:51.903 ] 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "subsystem": "vmd", 00:19:51.903 "config": [] 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "subsystem": "accel", 00:19:51.903 "config": [ 00:19:51.903 { 00:19:51.903 "method": "accel_set_options", 00:19:51.903 "params": { 00:19:51.903 "small_cache_size": 128, 00:19:51.903 "large_cache_size": 16, 00:19:51.903 "task_count": 2048, 00:19:51.903 "sequence_count": 2048, 00:19:51.903 "buf_count": 2048 00:19:51.903 } 00:19:51.903 } 00:19:51.903 ] 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "subsystem": "bdev", 00:19:51.903 "config": [ 00:19:51.903 { 00:19:51.903 "method": "bdev_set_options", 00:19:51.903 "params": { 00:19:51.903 "bdev_io_pool_size": 65535, 00:19:51.903 "bdev_io_cache_size": 256, 00:19:51.903 "bdev_auto_examine": true, 00:19:51.903 "iobuf_small_cache_size": 128, 00:19:51.903 "iobuf_large_cache_size": 16 
00:19:51.903 } 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "method": "bdev_raid_set_options", 00:19:51.903 "params": { 00:19:51.903 "process_window_size_kb": 1024, 00:19:51.903 "process_max_bandwidth_mb_sec": 0 00:19:51.903 } 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "method": "bdev_iscsi_set_options", 00:19:51.903 "params": { 00:19:51.903 "timeout_sec": 30 00:19:51.903 } 00:19:51.903 }, 00:19:51.903 { 00:19:51.903 "method": "bdev_nvme_set_options", 00:19:51.903 "params": { 00:19:51.903 "action_on_timeout": "none", 00:19:51.903 "timeout_us": 0, 00:19:51.903 "timeout_admin_us": 0, 00:19:51.904 "keep_alive_timeout_ms": 10000, 00:19:51.904 "arbitration_burst": 0, 00:19:51.904 "low_priority_weight": 0, 00:19:51.904 "medium_priority_weight": 0, 00:19:51.904 "high_priority_weight": 0, 00:19:51.904 "nvme_adminq_poll_period_us": 10000, 00:19:51.904 "nvme_ioq_poll_period_us": 0, 00:19:51.904 "io_queue_requests": 512, 00:19:51.904 "delay_cmd_submit": true, 00:19:51.904 "transport_retry_count": 4, 00:19:51.904 "bdev_retry_count": 3, 00:19:51.904 "transport_ack_timeout": 0, 00:19:51.904 "ctrlr_loss_timeout_sec": 0, 00:19:51.904 "reconnect_delay_sec": 0, 00:19:51.904 "fast_io_fail_timeout_sec": 0, 00:19:51.904 "disable_auto_failback": false, 00:19:51.904 "generate_uuids": false, 00:19:51.904 "transport_tos": 0, 00:19:51.904 "nvme_error_stat": false, 00:19:51.904 "rdma_srq_size": 0, 00:19:51.904 "io_path_stat": false, 00:19:51.904 "allow_accel_sequence": false, 00:19:51.904 "rdma_max_cq_size": 0, 00:19:51.904 "rdma_cm_event_timeout_ms": 0, 00:19:51.904 "dhchap_digests": [ 00:19:51.904 "sha256", 00:19:51.904 "sha384", 00:19:51.904 "sha512" 00:19:51.904 ], 00:19:51.904 "dhchap_dhgroups": [ 00:19:51.904 "null", 00:19:51.904 "ffdhe2048", 00:19:51.904 "ffdhe3072", 00:19:51.904 "ffdhe4096", 00:19:51.904 "ffdhe6144", 00:19:51.904 "ffdhe8192" 00:19:51.904 ] 00:19:51.904 } 00:19:51.904 }, 00:19:51.904 { 00:19:51.904 "method": "bdev_nvme_attach_controller", 00:19:51.904 "params": { 00:19:51.904 "name": "TLSTEST", 00:19:51.904 "trtype": "TCP", 00:19:51.904 "adrfam": "IPv4", 00:19:51.904 "traddr": "10.0.0.2", 00:19:51.904 "trsvcid": "4420", 00:19:51.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.904 "prchk_reftag": false, 00:19:51.904 "prchk_guard": false, 00:19:51.904 "ctrlr_loss_timeout_sec": 0, 00:19:51.904 "reconnect_delay_sec": 0, 00:19:51.904 "fast_io_fail_timeout_sec": 0, 00:19:51.904 "psk": "key0", 00:19:51.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.904 "hdgst": false, 00:19:51.904 "ddgst": false, 00:19:51.904 "multipath": "multipath" 00:19:51.904 } 00:19:51.904 }, 00:19:51.904 { 00:19:51.904 "method": "bdev_nvme_set_hotplug", 00:19:51.904 "params": { 00:19:51.904 "period_us": 100000, 00:19:51.904 "enable": false 00:19:51.904 } 00:19:51.904 }, 00:19:51.904 { 00:19:51.904 "method": "bdev_wait_for_examine" 00:19:51.904 } 00:19:51.904 ] 00:19:51.904 }, 00:19:51.904 { 00:19:51.904 "subsystem": "nbd", 00:19:51.904 "config": [] 00:19:51.904 } 00:19:51.904 ] 00:19:51.904 }' 00:19:51.904 [2024-10-09 00:27:22.348418] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
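The bdevperf instance starting here (pid 3274553) and the target started just before it (pid 3274490) illustrate the replay step at target/tls.sh@198-210: save_config dumps the live JSON configuration of both processes, the originals are killed, and new ones are launched with -c /dev/fd/62 and -c /dev/fd/63 so that the keyring entry, the TLS listener, the subsystem/namespace, and the TLS bdev_nvme_attach_controller are all applied during startup instead of over post-boot RPCs. In shell terms this is presumably process substitution; a rough equivalent, with binary paths relative to the SPDK tree and the netns/EAL flags omitted:

    tgtconf=$(scripts/rpc.py save_config)
    bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &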
00:19:51.904 [2024-10-09 00:27:22.348508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274553 ] 00:19:51.904 [2024-10-09 00:27:22.429963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.904 [2024-10-09 00:27:22.493924] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.165 [2024-10-09 00:27:22.633370] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.739 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.739 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:52.739 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:52.739 Running I/O for 10 seconds... 00:19:54.621 3980.00 IOPS, 15.55 MiB/s [2024-10-08T22:27:26.647Z] 4806.50 IOPS, 18.78 MiB/s [2024-10-08T22:27:27.589Z] 5063.00 IOPS, 19.78 MiB/s [2024-10-08T22:27:28.528Z] 5304.00 IOPS, 20.72 MiB/s [2024-10-08T22:27:29.469Z] 5107.20 IOPS, 19.95 MiB/s [2024-10-08T22:27:30.411Z] 5309.67 IOPS, 20.74 MiB/s [2024-10-08T22:27:31.351Z] 5345.43 IOPS, 20.88 MiB/s [2024-10-08T22:27:32.292Z] 5360.88 IOPS, 20.94 MiB/s [2024-10-08T22:27:33.672Z] 5406.11 IOPS, 21.12 MiB/s [2024-10-08T22:27:33.673Z] 5461.50 IOPS, 21.33 MiB/s 00:20:03.038 Latency(us) 00:20:03.038 [2024-10-08T22:27:33.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.038 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:03.038 Verification LBA range: start 0x0 length 0x2000 00:20:03.038 TLSTESTn1 : 10.01 5467.56 21.36 0.00 0.00 23377.55 4915.20 75147.95 00:20:03.038 [2024-10-08T22:27:33.673Z] =================================================================================================================== 00:20:03.038 [2024-10-08T22:27:33.673Z] Total : 5467.56 21.36 0.00 0.00 23377.55 4915.20 75147.95 00:20:03.038 { 00:20:03.038 "results": [ 00:20:03.038 { 00:20:03.038 "job": "TLSTESTn1", 00:20:03.038 "core_mask": "0x4", 00:20:03.038 "workload": "verify", 00:20:03.038 "status": "finished", 00:20:03.038 "verify_range": { 00:20:03.038 "start": 0, 00:20:03.038 "length": 8192 00:20:03.038 }, 00:20:03.038 "queue_depth": 128, 00:20:03.038 "io_size": 4096, 00:20:03.038 "runtime": 10.012138, 00:20:03.038 "iops": 5467.563471458344, 00:20:03.038 "mibps": 21.357669810384156, 00:20:03.038 "io_failed": 0, 00:20:03.038 "io_timeout": 0, 00:20:03.038 "avg_latency_us": 23377.545507288738, 00:20:03.038 "min_latency_us": 4915.2, 00:20:03.038 "max_latency_us": 75147.94666666667 00:20:03.038 } 00:20:03.038 ], 00:20:03.038 "core_count": 1 00:20:03.038 } 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3274553 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3274553 ']' 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3274553 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3274553 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3274553' 00:20:03.038 killing process with pid 3274553 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3274553 00:20:03.038 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.038 00:20:03.038 Latency(us) 00:20:03.038 [2024-10-08T22:27:33.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.038 [2024-10-08T22:27:33.673Z] =================================================================================================================== 00:20:03.038 [2024-10-08T22:27:33.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3274553 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3274490 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3274490 ']' 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3274490 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3274490 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3274490' 00:20:03.038 killing process with pid 3274490 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3274490 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3274490 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3276881 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3276881 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
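That completes the 10-second verify run against TLSTESTn1 started at target/tls.sh@213: roughly 5467 IOPS / 21.4 MiB/s at an average latency of about 23.4 ms, which is consistent with the queue depth of 128 (128 / 5467 IOPS ≈ 23 ms per I/O). Both the bdevperf client (pid 3274553) and the target (pid 3274490) are then torn down, and tls.sh@220 starts yet another target for the final pass. The client-side commands behind that measurement, as traced at tls.sh@193-194 (and re-applied to the second bdevperf through the replayed JSON) plus the trigger at tls.sh@213, condense to:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests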
00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3276881 ']' 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.038 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.302 [2024-10-09 00:27:33.698325] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:20:03.302 [2024-10-09 00:27:33.698384] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.302 [2024-10-09 00:27:33.781349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.302 [2024-10-09 00:27:33.867020] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.302 [2024-10-09 00:27:33.867080] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.303 [2024-10-09 00:27:33.867089] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.303 [2024-10-09 00:27:33.867096] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.303 [2024-10-09 00:27:33.867103] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
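This last target (pid 3276881) is started without an explicit reactor mask, so DPDK is handed -c 0x1 and the single reactor lands on core 0, and every tracepoint group is enabled (-e 0xFFFF). As the app_setup_trace notices above point out, the run can be inspected with the spdk_trace tool or by keeping the shared-memory trace file; the binary path below is an assumption (it is where an SPDK build normally places it), and the instance id matches the -i 0 on the command line:

    build/bin/spdk_trace -s nvmf -i 0      # live snapshot, per the notice above
    cp /dev/shm/nvmf_trace.0 /tmp/         # or keep the trace file for offline decoding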
00:20:03.303 [2024-10-09 00:27:33.867950] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.879 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:03.879 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:03.879 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:03.879 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:03.879 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.140 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.140 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.z1OMhTULXn 00:20:04.140 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z1OMhTULXn 00:20:04.140 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:04.140 [2024-10-09 00:27:34.714027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.140 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:04.402 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:04.663 [2024-10-09 00:27:35.115035] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.663 [2024-10-09 00:27:35.115378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.663 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:04.924 malloc0 00:20:04.924 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:04.924 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn 00:20:05.185 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3277258 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3277258 /var/tmp/bdevperf.sock 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3277258 ']' 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.445 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.445 [2024-10-09 00:27:35.982152] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:20:05.445 [2024-10-09 00:27:35.982228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277258 ] 00:20:05.445 [2024-10-09 00:27:36.062658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.704 [2024-10-09 00:27:36.124194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.275 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.275 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:06.275 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn 00:20:06.535 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:06.535 [2024-10-09 00:27:37.137006] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.795 nvme0n1 00:20:06.795 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.795 Running I/O for 1 seconds... 
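On the initiator side the same key file is registered with bdevperf's own RPC server and the controller is attached with that key. Stripped of the xtrace noise above, the host-side sequence is approximately:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF=/var/tmp/bdevperf.sock
  $RPC -s $BPERF keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn
  $RPC -s $BPERF bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # kick off the verify workload defined by bdevperf's -q 128 -o 4k -w verify -t 1 arguments
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests

The IOPS/latency table that follows is the output of that perform_tests call.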
00:20:07.791 4690.00 IOPS, 18.32 MiB/s 00:20:07.791 Latency(us) 00:20:07.791 [2024-10-08T22:27:38.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.791 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:07.791 Verification LBA range: start 0x0 length 0x2000 00:20:07.791 nvme0n1 : 1.02 4706.79 18.39 0.00 0.00 26935.52 4560.21 32549.55 00:20:07.791 [2024-10-08T22:27:38.426Z] =================================================================================================================== 00:20:07.791 [2024-10-08T22:27:38.426Z] Total : 4706.79 18.39 0.00 0.00 26935.52 4560.21 32549.55 00:20:07.791 { 00:20:07.791 "results": [ 00:20:07.791 { 00:20:07.791 "job": "nvme0n1", 00:20:07.791 "core_mask": "0x2", 00:20:07.791 "workload": "verify", 00:20:07.791 "status": "finished", 00:20:07.791 "verify_range": { 00:20:07.791 "start": 0, 00:20:07.791 "length": 8192 00:20:07.791 }, 00:20:07.791 "queue_depth": 128, 00:20:07.791 "io_size": 4096, 00:20:07.791 "runtime": 1.023628, 00:20:07.791 "iops": 4706.788012832787, 00:20:07.791 "mibps": 18.385890675128074, 00:20:07.791 "io_failed": 0, 00:20:07.791 "io_timeout": 0, 00:20:07.791 "avg_latency_us": 26935.518738065588, 00:20:07.791 "min_latency_us": 4560.213333333333, 00:20:07.791 "max_latency_us": 32549.546666666665 00:20:07.791 } 00:20:07.791 ], 00:20:07.791 "core_count": 1 00:20:07.791 } 00:20:07.791 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3277258 00:20:07.791 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3277258 ']' 00:20:07.791 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3277258 00:20:07.791 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:07.791 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.791 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3277258 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3277258' 00:20:08.071 killing process with pid 3277258 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3277258 00:20:08.071 Received shutdown signal, test time was about 1.000000 seconds 00:20:08.071 00:20:08.071 Latency(us) 00:20:08.071 [2024-10-08T22:27:38.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.071 [2024-10-08T22:27:38.706Z] =================================================================================================================== 00:20:08.071 [2024-10-08T22:27:38.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3277258 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3276881 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3276881 ']' 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3276881 00:20:08.071 00:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3276881 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3276881' 00:20:08.071 killing process with pid 3276881 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3276881 00:20:08.071 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3276881 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3277871 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3277871 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3277871 ']' 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.343 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.344 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.344 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.344 [2024-10-09 00:27:38.821472] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:20:08.344 [2024-10-09 00:27:38.821531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.344 [2024-10-09 00:27:38.904940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.344 [2024-10-09 00:27:38.959061] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.344 [2024-10-09 00:27:38.959093] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:08.344 [2024-10-09 00:27:38.959099] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.344 [2024-10-09 00:27:38.959104] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.344 [2024-10-09 00:27:38.959108] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.344 [2024-10-09 00:27:38.959562] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.285 [2024-10-09 00:27:39.654674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.285 malloc0 00:20:09.285 [2024-10-09 00:27:39.695216] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.285 [2024-10-09 00:27:39.695431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3277975 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3277975 /var/tmp/bdevperf.sock 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3277975 ']' 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.285 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.285 [2024-10-09 00:27:39.784785] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:20:09.285 [2024-10-09 00:27:39.784834] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277975 ] 00:20:09.285 [2024-10-09 00:27:39.862292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.285 [2024-10-09 00:27:39.915872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.227 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.227 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:10.227 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z1OMhTULXn 00:20:10.227 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:10.487 [2024-10-09 00:27:40.879257] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.487 nvme0n1 00:20:10.487 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:10.487 Running I/O for 1 seconds... 00:20:11.871 5560.00 IOPS, 21.72 MiB/s 00:20:11.871 Latency(us) 00:20:11.871 [2024-10-08T22:27:42.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.871 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.871 Verification LBA range: start 0x0 length 0x2000 00:20:11.871 nvme0n1 : 1.05 5427.21 21.20 0.00 0.00 23131.93 5406.72 46312.11 00:20:11.871 [2024-10-08T22:27:42.506Z] =================================================================================================================== 00:20:11.871 [2024-10-08T22:27:42.506Z] Total : 5427.21 21.20 0.00 0.00 23131.93 5406.72 46312.11 00:20:11.871 { 00:20:11.871 "results": [ 00:20:11.871 { 00:20:11.871 "job": "nvme0n1", 00:20:11.871 "core_mask": "0x2", 00:20:11.871 "workload": "verify", 00:20:11.871 "status": "finished", 00:20:11.871 "verify_range": { 00:20:11.871 "start": 0, 00:20:11.871 "length": 8192 00:20:11.871 }, 00:20:11.871 "queue_depth": 128, 00:20:11.871 "io_size": 4096, 00:20:11.871 "runtime": 1.048052, 00:20:11.871 "iops": 5427.211626904009, 00:20:11.871 "mibps": 21.200045417593785, 00:20:11.871 "io_failed": 0, 00:20:11.871 "io_timeout": 0, 00:20:11.871 "avg_latency_us": 23131.927163619315, 00:20:11.871 "min_latency_us": 5406.72, 00:20:11.871 "max_latency_us": 46312.10666666667 00:20:11.871 } 00:20:11.871 ], 00:20:11.871 "core_count": 1 00:20:11.871 } 00:20:11.871 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:11.871 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.871 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.871 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.871 00:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:11.871 "subsystems": [ 00:20:11.871 { 00:20:11.871 "subsystem": "keyring", 00:20:11.871 "config": [ 00:20:11.871 { 00:20:11.871 "method": "keyring_file_add_key", 00:20:11.871 "params": { 00:20:11.871 "name": "key0", 00:20:11.871 "path": "/tmp/tmp.z1OMhTULXn" 00:20:11.871 } 00:20:11.871 } 00:20:11.871 ] 00:20:11.871 }, 00:20:11.871 { 00:20:11.871 "subsystem": "iobuf", 00:20:11.871 "config": [ 00:20:11.871 { 00:20:11.871 "method": "iobuf_set_options", 00:20:11.871 "params": { 00:20:11.871 "small_pool_count": 8192, 00:20:11.871 "large_pool_count": 1024, 00:20:11.871 "small_bufsize": 8192, 00:20:11.871 "large_bufsize": 135168 00:20:11.871 } 00:20:11.871 } 00:20:11.871 ] 00:20:11.871 }, 00:20:11.871 { 00:20:11.871 "subsystem": "sock", 00:20:11.871 "config": [ 00:20:11.871 { 00:20:11.871 "method": "sock_set_default_impl", 00:20:11.871 "params": { 00:20:11.871 "impl_name": "posix" 00:20:11.871 } 00:20:11.871 }, 00:20:11.871 { 00:20:11.871 "method": "sock_impl_set_options", 00:20:11.871 "params": { 00:20:11.871 "impl_name": "ssl", 00:20:11.871 "recv_buf_size": 4096, 00:20:11.871 "send_buf_size": 4096, 00:20:11.871 "enable_recv_pipe": true, 00:20:11.871 "enable_quickack": false, 00:20:11.871 "enable_placement_id": 0, 00:20:11.871 "enable_zerocopy_send_server": true, 00:20:11.871 "enable_zerocopy_send_client": false, 00:20:11.871 "zerocopy_threshold": 0, 00:20:11.871 "tls_version": 0, 00:20:11.871 "enable_ktls": false 00:20:11.871 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "sock_impl_set_options", 00:20:11.872 "params": { 00:20:11.872 "impl_name": "posix", 00:20:11.872 "recv_buf_size": 2097152, 00:20:11.872 "send_buf_size": 2097152, 00:20:11.872 "enable_recv_pipe": true, 00:20:11.872 "enable_quickack": false, 00:20:11.872 "enable_placement_id": 0, 00:20:11.872 "enable_zerocopy_send_server": true, 00:20:11.872 "enable_zerocopy_send_client": false, 00:20:11.872 "zerocopy_threshold": 0, 00:20:11.872 "tls_version": 0, 00:20:11.872 "enable_ktls": false 00:20:11.872 } 00:20:11.872 } 00:20:11.872 ] 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "subsystem": "vmd", 00:20:11.872 "config": [] 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "subsystem": "accel", 00:20:11.872 "config": [ 00:20:11.872 { 00:20:11.872 "method": "accel_set_options", 00:20:11.872 "params": { 00:20:11.872 "small_cache_size": 128, 00:20:11.872 "large_cache_size": 16, 00:20:11.872 "task_count": 2048, 00:20:11.872 "sequence_count": 2048, 00:20:11.872 "buf_count": 2048 00:20:11.872 } 00:20:11.872 } 00:20:11.872 ] 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "subsystem": "bdev", 00:20:11.872 "config": [ 00:20:11.872 { 00:20:11.872 "method": "bdev_set_options", 00:20:11.872 "params": { 00:20:11.872 "bdev_io_pool_size": 65535, 00:20:11.872 "bdev_io_cache_size": 256, 00:20:11.872 "bdev_auto_examine": true, 00:20:11.872 "iobuf_small_cache_size": 128, 00:20:11.872 "iobuf_large_cache_size": 16 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "bdev_raid_set_options", 00:20:11.872 "params": { 00:20:11.872 "process_window_size_kb": 1024, 00:20:11.872 "process_max_bandwidth_mb_sec": 0 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "bdev_iscsi_set_options", 00:20:11.872 "params": { 00:20:11.872 "timeout_sec": 30 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "bdev_nvme_set_options", 00:20:11.872 "params": { 00:20:11.872 "action_on_timeout": "none", 00:20:11.872 "timeout_us": 0, 00:20:11.872 
"timeout_admin_us": 0, 00:20:11.872 "keep_alive_timeout_ms": 10000, 00:20:11.872 "arbitration_burst": 0, 00:20:11.872 "low_priority_weight": 0, 00:20:11.872 "medium_priority_weight": 0, 00:20:11.872 "high_priority_weight": 0, 00:20:11.872 "nvme_adminq_poll_period_us": 10000, 00:20:11.872 "nvme_ioq_poll_period_us": 0, 00:20:11.872 "io_queue_requests": 0, 00:20:11.872 "delay_cmd_submit": true, 00:20:11.872 "transport_retry_count": 4, 00:20:11.872 "bdev_retry_count": 3, 00:20:11.872 "transport_ack_timeout": 0, 00:20:11.872 "ctrlr_loss_timeout_sec": 0, 00:20:11.872 "reconnect_delay_sec": 0, 00:20:11.872 "fast_io_fail_timeout_sec": 0, 00:20:11.872 "disable_auto_failback": false, 00:20:11.872 "generate_uuids": false, 00:20:11.872 "transport_tos": 0, 00:20:11.872 "nvme_error_stat": false, 00:20:11.872 "rdma_srq_size": 0, 00:20:11.872 "io_path_stat": false, 00:20:11.872 "allow_accel_sequence": false, 00:20:11.872 "rdma_max_cq_size": 0, 00:20:11.872 "rdma_cm_event_timeout_ms": 0, 00:20:11.872 "dhchap_digests": [ 00:20:11.872 "sha256", 00:20:11.872 "sha384", 00:20:11.872 "sha512" 00:20:11.872 ], 00:20:11.872 "dhchap_dhgroups": [ 00:20:11.872 "null", 00:20:11.872 "ffdhe2048", 00:20:11.872 "ffdhe3072", 00:20:11.872 "ffdhe4096", 00:20:11.872 "ffdhe6144", 00:20:11.872 "ffdhe8192" 00:20:11.872 ] 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "bdev_nvme_set_hotplug", 00:20:11.872 "params": { 00:20:11.872 "period_us": 100000, 00:20:11.872 "enable": false 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "bdev_malloc_create", 00:20:11.872 "params": { 00:20:11.872 "name": "malloc0", 00:20:11.872 "num_blocks": 8192, 00:20:11.872 "block_size": 4096, 00:20:11.872 "physical_block_size": 4096, 00:20:11.872 "uuid": "47900c2f-f8c0-42cc-9c17-29b4a60f5544", 00:20:11.872 "optimal_io_boundary": 0, 00:20:11.872 "md_size": 0, 00:20:11.872 "dif_type": 0, 00:20:11.872 "dif_is_head_of_md": false, 00:20:11.872 "dif_pi_format": 0 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "bdev_wait_for_examine" 00:20:11.872 } 00:20:11.872 ] 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "subsystem": "nbd", 00:20:11.872 "config": [] 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "subsystem": "scheduler", 00:20:11.872 "config": [ 00:20:11.872 { 00:20:11.872 "method": "framework_set_scheduler", 00:20:11.872 "params": { 00:20:11.872 "name": "static" 00:20:11.872 } 00:20:11.872 } 00:20:11.872 ] 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "subsystem": "nvmf", 00:20:11.872 "config": [ 00:20:11.872 { 00:20:11.872 "method": "nvmf_set_config", 00:20:11.872 "params": { 00:20:11.872 "discovery_filter": "match_any", 00:20:11.872 "admin_cmd_passthru": { 00:20:11.872 "identify_ctrlr": false 00:20:11.872 }, 00:20:11.872 "dhchap_digests": [ 00:20:11.872 "sha256", 00:20:11.872 "sha384", 00:20:11.872 "sha512" 00:20:11.872 ], 00:20:11.872 "dhchap_dhgroups": [ 00:20:11.872 "null", 00:20:11.872 "ffdhe2048", 00:20:11.872 "ffdhe3072", 00:20:11.872 "ffdhe4096", 00:20:11.872 "ffdhe6144", 00:20:11.872 "ffdhe8192" 00:20:11.872 ] 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "nvmf_set_max_subsystems", 00:20:11.872 "params": { 00:20:11.872 "max_subsystems": 1024 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "nvmf_set_crdt", 00:20:11.872 "params": { 00:20:11.872 "crdt1": 0, 00:20:11.872 "crdt2": 0, 00:20:11.872 "crdt3": 0 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "nvmf_create_transport", 00:20:11.872 "params": { 00:20:11.872 "trtype": 
"TCP", 00:20:11.872 "max_queue_depth": 128, 00:20:11.872 "max_io_qpairs_per_ctrlr": 127, 00:20:11.872 "in_capsule_data_size": 4096, 00:20:11.872 "max_io_size": 131072, 00:20:11.872 "io_unit_size": 131072, 00:20:11.872 "max_aq_depth": 128, 00:20:11.872 "num_shared_buffers": 511, 00:20:11.872 "buf_cache_size": 4294967295, 00:20:11.872 "dif_insert_or_strip": false, 00:20:11.872 "zcopy": false, 00:20:11.872 "c2h_success": false, 00:20:11.872 "sock_priority": 0, 00:20:11.872 "abort_timeout_sec": 1, 00:20:11.872 "ack_timeout": 0, 00:20:11.872 "data_wr_pool_size": 0 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "nvmf_create_subsystem", 00:20:11.872 "params": { 00:20:11.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.872 "allow_any_host": false, 00:20:11.872 "serial_number": "00000000000000000000", 00:20:11.872 "model_number": "SPDK bdev Controller", 00:20:11.872 "max_namespaces": 32, 00:20:11.872 "min_cntlid": 1, 00:20:11.872 "max_cntlid": 65519, 00:20:11.872 "ana_reporting": false 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "nvmf_subsystem_add_host", 00:20:11.872 "params": { 00:20:11.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.872 "host": "nqn.2016-06.io.spdk:host1", 00:20:11.872 "psk": "key0" 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "nvmf_subsystem_add_ns", 00:20:11.872 "params": { 00:20:11.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.872 "namespace": { 00:20:11.872 "nsid": 1, 00:20:11.872 "bdev_name": "malloc0", 00:20:11.872 "nguid": "47900C2FF8C042CC9C1729B4A60F5544", 00:20:11.872 "uuid": "47900c2f-f8c0-42cc-9c17-29b4a60f5544", 00:20:11.872 "no_auto_visible": false 00:20:11.872 } 00:20:11.872 } 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "method": "nvmf_subsystem_add_listener", 00:20:11.872 "params": { 00:20:11.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.872 "listen_address": { 00:20:11.872 "trtype": "TCP", 00:20:11.872 "adrfam": "IPv4", 00:20:11.872 "traddr": "10.0.0.2", 00:20:11.872 "trsvcid": "4420" 00:20:11.872 }, 00:20:11.872 "secure_channel": false, 00:20:11.872 "sock_impl": "ssl" 00:20:11.872 } 00:20:11.872 } 00:20:11.872 ] 00:20:11.872 } 00:20:11.872 ] 00:20:11.872 }' 00:20:11.872 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:11.872 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:11.872 "subsystems": [ 00:20:11.872 { 00:20:11.872 "subsystem": "keyring", 00:20:11.872 "config": [ 00:20:11.872 { 00:20:11.872 "method": "keyring_file_add_key", 00:20:11.872 "params": { 00:20:11.872 "name": "key0", 00:20:11.872 "path": "/tmp/tmp.z1OMhTULXn" 00:20:11.872 } 00:20:11.872 } 00:20:11.872 ] 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "subsystem": "iobuf", 00:20:11.872 "config": [ 00:20:11.872 { 00:20:11.872 "method": "iobuf_set_options", 00:20:11.872 "params": { 00:20:11.872 "small_pool_count": 8192, 00:20:11.872 "large_pool_count": 1024, 00:20:11.872 "small_bufsize": 8192, 00:20:11.872 "large_bufsize": 135168 00:20:11.872 } 00:20:11.872 } 00:20:11.872 ] 00:20:11.872 }, 00:20:11.872 { 00:20:11.872 "subsystem": "sock", 00:20:11.872 "config": [ 00:20:11.872 { 00:20:11.872 "method": "sock_set_default_impl", 00:20:11.872 "params": { 00:20:11.872 "impl_name": "posix" 00:20:11.872 } 00:20:11.872 }, 00:20:11.873 { 00:20:11.873 "method": "sock_impl_set_options", 00:20:11.873 "params": { 00:20:11.873 "impl_name": "ssl", 00:20:11.873 
"recv_buf_size": 4096, 00:20:11.873 "send_buf_size": 4096, 00:20:11.873 "enable_recv_pipe": true, 00:20:11.873 "enable_quickack": false, 00:20:11.873 "enable_placement_id": 0, 00:20:11.873 "enable_zerocopy_send_server": true, 00:20:11.873 "enable_zerocopy_send_client": false, 00:20:11.873 "zerocopy_threshold": 0, 00:20:11.873 "tls_version": 0, 00:20:11.873 "enable_ktls": false 00:20:11.873 } 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "method": "sock_impl_set_options", 00:20:11.873 "params": { 00:20:11.873 "impl_name": "posix", 00:20:11.873 "recv_buf_size": 2097152, 00:20:11.873 "send_buf_size": 2097152, 00:20:11.873 "enable_recv_pipe": true, 00:20:11.873 "enable_quickack": false, 00:20:11.873 "enable_placement_id": 0, 00:20:11.873 "enable_zerocopy_send_server": true, 00:20:11.873 "enable_zerocopy_send_client": false, 00:20:11.873 "zerocopy_threshold": 0, 00:20:11.873 "tls_version": 0, 00:20:11.873 "enable_ktls": false 00:20:11.873 } 00:20:11.873 } 00:20:11.873 ] 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "subsystem": "vmd", 00:20:11.873 "config": [] 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "subsystem": "accel", 00:20:11.873 "config": [ 00:20:11.873 { 00:20:11.873 "method": "accel_set_options", 00:20:11.873 "params": { 00:20:11.873 "small_cache_size": 128, 00:20:11.873 "large_cache_size": 16, 00:20:11.873 "task_count": 2048, 00:20:11.873 "sequence_count": 2048, 00:20:11.873 "buf_count": 2048 00:20:11.873 } 00:20:11.873 } 00:20:11.873 ] 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "subsystem": "bdev", 00:20:11.873 "config": [ 00:20:11.873 { 00:20:11.873 "method": "bdev_set_options", 00:20:11.873 "params": { 00:20:11.873 "bdev_io_pool_size": 65535, 00:20:11.873 "bdev_io_cache_size": 256, 00:20:11.873 "bdev_auto_examine": true, 00:20:11.873 "iobuf_small_cache_size": 128, 00:20:11.873 "iobuf_large_cache_size": 16 00:20:11.873 } 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "method": "bdev_raid_set_options", 00:20:11.873 "params": { 00:20:11.873 "process_window_size_kb": 1024, 00:20:11.873 "process_max_bandwidth_mb_sec": 0 00:20:11.873 } 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "method": "bdev_iscsi_set_options", 00:20:11.873 "params": { 00:20:11.873 "timeout_sec": 30 00:20:11.873 } 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "method": "bdev_nvme_set_options", 00:20:11.873 "params": { 00:20:11.873 "action_on_timeout": "none", 00:20:11.873 "timeout_us": 0, 00:20:11.873 "timeout_admin_us": 0, 00:20:11.873 "keep_alive_timeout_ms": 10000, 00:20:11.873 "arbitration_burst": 0, 00:20:11.873 "low_priority_weight": 0, 00:20:11.873 "medium_priority_weight": 0, 00:20:11.873 "high_priority_weight": 0, 00:20:11.873 "nvme_adminq_poll_period_us": 10000, 00:20:11.873 "nvme_ioq_poll_period_us": 0, 00:20:11.873 "io_queue_requests": 512, 00:20:11.873 "delay_cmd_submit": true, 00:20:11.873 "transport_retry_count": 4, 00:20:11.873 "bdev_retry_count": 3, 00:20:11.873 "transport_ack_timeout": 0, 00:20:11.873 "ctrlr_loss_timeout_sec": 0, 00:20:11.873 "reconnect_delay_sec": 0, 00:20:11.873 "fast_io_fail_timeout_sec": 0, 00:20:11.873 "disable_auto_failback": false, 00:20:11.873 "generate_uuids": false, 00:20:11.873 "transport_tos": 0, 00:20:11.873 "nvme_error_stat": false, 00:20:11.873 "rdma_srq_size": 0, 00:20:11.873 "io_path_stat": false, 00:20:11.873 "allow_accel_sequence": false, 00:20:11.873 "rdma_max_cq_size": 0, 00:20:11.873 "rdma_cm_event_timeout_ms": 0, 00:20:11.873 "dhchap_digests": [ 00:20:11.873 "sha256", 00:20:11.873 "sha384", 00:20:11.873 "sha512" 00:20:11.873 ], 00:20:11.873 "dhchap_dhgroups": [ 
00:20:11.873 "null", 00:20:11.873 "ffdhe2048", 00:20:11.873 "ffdhe3072", 00:20:11.873 "ffdhe4096", 00:20:11.873 "ffdhe6144", 00:20:11.873 "ffdhe8192" 00:20:11.873 ] 00:20:11.873 } 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "method": "bdev_nvme_attach_controller", 00:20:11.873 "params": { 00:20:11.873 "name": "nvme0", 00:20:11.873 "trtype": "TCP", 00:20:11.873 "adrfam": "IPv4", 00:20:11.873 "traddr": "10.0.0.2", 00:20:11.873 "trsvcid": "4420", 00:20:11.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.873 "prchk_reftag": false, 00:20:11.873 "prchk_guard": false, 00:20:11.873 "ctrlr_loss_timeout_sec": 0, 00:20:11.873 "reconnect_delay_sec": 0, 00:20:11.873 "fast_io_fail_timeout_sec": 0, 00:20:11.873 "psk": "key0", 00:20:11.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.873 "hdgst": false, 00:20:11.873 "ddgst": false, 00:20:11.873 "multipath": "multipath" 00:20:11.873 } 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "method": "bdev_nvme_set_hotplug", 00:20:11.873 "params": { 00:20:11.873 "period_us": 100000, 00:20:11.873 "enable": false 00:20:11.873 } 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "method": "bdev_enable_histogram", 00:20:11.873 "params": { 00:20:11.873 "name": "nvme0n1", 00:20:11.873 "enable": true 00:20:11.873 } 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "method": "bdev_wait_for_examine" 00:20:11.873 } 00:20:11.873 ] 00:20:11.873 }, 00:20:11.873 { 00:20:11.873 "subsystem": "nbd", 00:20:11.873 "config": [] 00:20:11.873 } 00:20:11.873 ] 00:20:11.873 }' 00:20:11.873 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3277975 00:20:11.873 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3277975 ']' 00:20:11.873 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3277975 00:20:11.873 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:12.133 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3277975 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3277975' 00:20:12.134 killing process with pid 3277975 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3277975 00:20:12.134 Received shutdown signal, test time was about 1.000000 seconds 00:20:12.134 00:20:12.134 Latency(us) 00:20:12.134 [2024-10-08T22:27:42.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.134 [2024-10-08T22:27:42.769Z] =================================================================================================================== 00:20:12.134 [2024-10-08T22:27:42.769Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3277975 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3277871 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3277871 ']' 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 3277871 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3277871 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3277871' 00:20:12.134 killing process with pid 3277871 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3277871 00:20:12.134 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3277871 00:20:12.394 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:12.394 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:12.394 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:12.394 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:12.394 "subsystems": [ 00:20:12.394 { 00:20:12.394 "subsystem": "keyring", 00:20:12.394 "config": [ 00:20:12.394 { 00:20:12.394 "method": "keyring_file_add_key", 00:20:12.394 "params": { 00:20:12.394 "name": "key0", 00:20:12.394 "path": "/tmp/tmp.z1OMhTULXn" 00:20:12.394 } 00:20:12.394 } 00:20:12.394 ] 00:20:12.394 }, 00:20:12.394 { 00:20:12.394 "subsystem": "iobuf", 00:20:12.394 "config": [ 00:20:12.394 { 00:20:12.394 "method": "iobuf_set_options", 00:20:12.394 "params": { 00:20:12.394 "small_pool_count": 8192, 00:20:12.394 "large_pool_count": 1024, 00:20:12.394 "small_bufsize": 8192, 00:20:12.394 "large_bufsize": 135168 00:20:12.394 } 00:20:12.394 } 00:20:12.394 ] 00:20:12.394 }, 00:20:12.394 { 00:20:12.394 "subsystem": "sock", 00:20:12.394 "config": [ 00:20:12.394 { 00:20:12.394 "method": "sock_set_default_impl", 00:20:12.394 "params": { 00:20:12.394 "impl_name": "posix" 00:20:12.394 } 00:20:12.394 }, 00:20:12.394 { 00:20:12.394 "method": "sock_impl_set_options", 00:20:12.394 "params": { 00:20:12.394 "impl_name": "ssl", 00:20:12.394 "recv_buf_size": 4096, 00:20:12.394 "send_buf_size": 4096, 00:20:12.394 "enable_recv_pipe": true, 00:20:12.395 "enable_quickack": false, 00:20:12.395 "enable_placement_id": 0, 00:20:12.395 "enable_zerocopy_send_server": true, 00:20:12.395 "enable_zerocopy_send_client": false, 00:20:12.395 "zerocopy_threshold": 0, 00:20:12.395 "tls_version": 0, 00:20:12.395 "enable_ktls": false 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "sock_impl_set_options", 00:20:12.395 "params": { 00:20:12.395 "impl_name": "posix", 00:20:12.395 "recv_buf_size": 2097152, 00:20:12.395 "send_buf_size": 2097152, 00:20:12.395 "enable_recv_pipe": true, 00:20:12.395 "enable_quickack": false, 00:20:12.395 "enable_placement_id": 0, 00:20:12.395 "enable_zerocopy_send_server": true, 00:20:12.395 "enable_zerocopy_send_client": false, 00:20:12.395 "zerocopy_threshold": 0, 00:20:12.395 "tls_version": 0, 00:20:12.395 "enable_ktls": false 00:20:12.395 } 00:20:12.395 } 00:20:12.395 ] 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 
"subsystem": "vmd", 00:20:12.395 "config": [] 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "subsystem": "accel", 00:20:12.395 "config": [ 00:20:12.395 { 00:20:12.395 "method": "accel_set_options", 00:20:12.395 "params": { 00:20:12.395 "small_cache_size": 128, 00:20:12.395 "large_cache_size": 16, 00:20:12.395 "task_count": 2048, 00:20:12.395 "sequence_count": 2048, 00:20:12.395 "buf_count": 2048 00:20:12.395 } 00:20:12.395 } 00:20:12.395 ] 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "subsystem": "bdev", 00:20:12.395 "config": [ 00:20:12.395 { 00:20:12.395 "method": "bdev_set_options", 00:20:12.395 "params": { 00:20:12.395 "bdev_io_pool_size": 65535, 00:20:12.395 "bdev_io_cache_size": 256, 00:20:12.395 "bdev_auto_examine": true, 00:20:12.395 "iobuf_small_cache_size": 128, 00:20:12.395 "iobuf_large_cache_size": 16 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "bdev_raid_set_options", 00:20:12.395 "params": { 00:20:12.395 "process_window_size_kb": 1024, 00:20:12.395 "process_max_bandwidth_mb_sec": 0 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "bdev_iscsi_set_options", 00:20:12.395 "params": { 00:20:12.395 "timeout_sec": 30 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "bdev_nvme_set_options", 00:20:12.395 "params": { 00:20:12.395 "action_on_timeout": "none", 00:20:12.395 "timeout_us": 0, 00:20:12.395 "timeout_admin_us": 0, 00:20:12.395 "keep_alive_timeout_ms": 10000, 00:20:12.395 "arbitration_burst": 0, 00:20:12.395 "low_priority_weight": 0, 00:20:12.395 "medium_priority_weight": 0, 00:20:12.395 "high_priority_weight": 0, 00:20:12.395 "nvme_adminq_poll_period_us": 10000, 00:20:12.395 "nvme_ioq_poll_period_us": 0, 00:20:12.395 "io_queue_requests": 0, 00:20:12.395 "delay_cmd_submit": true, 00:20:12.395 "transport_retry_count": 4, 00:20:12.395 "bdev_retry_count": 3, 00:20:12.395 "transport_ack_timeout": 0, 00:20:12.395 "ctrlr_loss_timeout_sec": 0, 00:20:12.395 "reconnect_delay_sec": 0, 00:20:12.395 "fast_io_fail_timeout_sec": 0, 00:20:12.395 "disable_auto_failback": false, 00:20:12.395 "generate_uuids": false, 00:20:12.395 "transport_tos": 0, 00:20:12.395 "nvme_error_stat": false, 00:20:12.395 "rdma_srq_size": 0, 00:20:12.395 "io_path_stat": false, 00:20:12.395 "allow_accel_sequence": false, 00:20:12.395 "rdma_max_cq_size": 0, 00:20:12.395 "rdma_cm_event_timeout_ms": 0, 00:20:12.395 "dhchap_digests": [ 00:20:12.395 "sha256", 00:20:12.395 "sha384", 00:20:12.395 "sha512" 00:20:12.395 ], 00:20:12.395 "dhchap_dhgroups": [ 00:20:12.395 "null", 00:20:12.395 "ffdhe2048", 00:20:12.395 "ffdhe3072", 00:20:12.395 "ffdhe4096", 00:20:12.395 "ffdhe6144", 00:20:12.395 "ffdhe8192" 00:20:12.395 ] 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "bdev_nvme_set_hotplug", 00:20:12.395 "params": { 00:20:12.395 "period_us": 100000, 00:20:12.395 "enable": false 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "bdev_malloc_create", 00:20:12.395 "params": { 00:20:12.395 "name": "malloc0", 00:20:12.395 "num_blocks": 8192, 00:20:12.395 "block_size": 4096, 00:20:12.395 "physical_block_size": 4096, 00:20:12.395 "uuid": "47900c2f-f8c0-42cc-9c17-29b4a60f5544", 00:20:12.395 "optimal_io_boundary": 0, 00:20:12.395 "md_size": 0, 00:20:12.395 "dif_type": 0, 00:20:12.395 "dif_is_head_of_md": false, 00:20:12.395 "dif_pi_format": 0 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "bdev_wait_for_examine" 00:20:12.395 } 00:20:12.395 ] 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "subsystem": "nbd", 
00:20:12.395 "config": [] 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "subsystem": "scheduler", 00:20:12.395 "config": [ 00:20:12.395 { 00:20:12.395 "method": "framework_set_scheduler", 00:20:12.395 "params": { 00:20:12.395 "name": "static" 00:20:12.395 } 00:20:12.395 } 00:20:12.395 ] 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "subsystem": "nvmf", 00:20:12.395 "config": [ 00:20:12.395 { 00:20:12.395 "method": "nvmf_set_config", 00:20:12.395 "params": { 00:20:12.395 "discovery_filter": "match_any", 00:20:12.395 "admin_cmd_passthru": { 00:20:12.395 "identify_ctrlr": false 00:20:12.395 }, 00:20:12.395 "dhchap_digests": [ 00:20:12.395 "sha256", 00:20:12.395 "sha384", 00:20:12.395 "sha512" 00:20:12.395 ], 00:20:12.395 "dhchap_dhgroups": [ 00:20:12.395 "null", 00:20:12.395 "ffdhe2048", 00:20:12.395 "ffdhe3072", 00:20:12.395 "ffdhe4096", 00:20:12.395 "ffdhe6144", 00:20:12.395 "ffdhe8192" 00:20:12.395 ] 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "nvmf_set_max_subsystems", 00:20:12.395 "params": { 00:20:12.395 "max_subsystems": 1024 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "nvmf_set_crdt", 00:20:12.395 "params": { 00:20:12.395 "crdt1": 0, 00:20:12.395 "crdt2": 0, 00:20:12.395 "crdt3": 0 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "nvmf_create_transport", 00:20:12.395 "params": { 00:20:12.395 "trtype": "TCP", 00:20:12.395 "max_queue_depth": 128, 00:20:12.395 "max_io_qpairs_per_ctrlr": 127, 00:20:12.395 "in_capsule_data_size": 4096, 00:20:12.395 "max_io_size": 131072, 00:20:12.395 "io_unit_size": 131072, 00:20:12.395 "max_aq_depth": 128, 00:20:12.395 "num_shared_buffers": 511, 00:20:12.395 "buf_cache_size": 4294967295, 00:20:12.395 "dif_insert_or_strip": false, 00:20:12.395 "zcopy": false, 00:20:12.395 "c2h_success": false, 00:20:12.395 "sock_priority": 0, 00:20:12.395 "abort_timeout_sec": 1, 00:20:12.395 "ack_timeout": 0, 00:20:12.395 "data_wr_pool_size": 0 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "nvmf_create_subsystem", 00:20:12.395 "params": { 00:20:12.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.395 "allow_any_host": false, 00:20:12.395 "serial_number": "00000000000000000000", 00:20:12.395 "model_number": "SPDK bdev Controller", 00:20:12.395 "max_namespaces": 32, 00:20:12.395 "min_cntlid": 1, 00:20:12.395 "max_cntlid": 65519, 00:20:12.395 "ana_reporting": false 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "nvmf_subsystem_add_host", 00:20:12.395 "params": { 00:20:12.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.395 "host": "nqn.2016-06.io.spdk:host1", 00:20:12.395 "psk": "key0" 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "nvmf_subsystem_add_ns", 00:20:12.395 "params": { 00:20:12.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.395 "namespace": { 00:20:12.395 "nsid": 1, 00:20:12.395 "bdev_name": "malloc0", 00:20:12.395 "nguid": "47900C2FF8C042CC9C1729B4A60F5544", 00:20:12.395 "uuid": "47900c2f-f8c0-42cc-9c17-29b4a60f5544", 00:20:12.395 "no_auto_visible": false 00:20:12.395 } 00:20:12.395 } 00:20:12.395 }, 00:20:12.395 { 00:20:12.395 "method": "nvmf_subsystem_add_listener", 00:20:12.395 "params": { 00:20:12.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.395 "listen_address": { 00:20:12.395 "trtype": "TCP", 00:20:12.395 "adrfam": "IPv4", 00:20:12.395 "traddr": "10.0.0.2", 00:20:12.395 "trsvcid": "4420" 00:20:12.395 }, 00:20:12.395 "secure_channel": false, 00:20:12.395 "sock_impl": "ssl" 00:20:12.395 } 00:20:12.395 } 00:20:12.395 ] 
00:20:12.395 } 00:20:12.395 ] 00:20:12.395 }' 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3278657 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3278657 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3278657 ']' 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.395 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.395 [2024-10-09 00:27:42.942753] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:20:12.395 [2024-10-09 00:27:42.942813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.395 [2024-10-09 00:27:43.025979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.656 [2024-10-09 00:27:43.080084] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.656 [2024-10-09 00:27:43.080115] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.656 [2024-10-09 00:27:43.080121] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.656 [2024-10-09 00:27:43.080126] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.656 [2024-10-09 00:27:43.080130] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
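This last test case (target/tls.sh@267-277) exercises configuration replay: instead of re-issuing RPCs, the JSON dumped by save_config from the previous target and bdevperf instances (the 'tgtcfg' and 'bperfcfg' blobs above) is fed back on the command line via /dev/fd. A simplified equivalent using temporary files rather than process substitution would be:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  $RPC save_config > tgt.json                               # live target -> JSON
  $RPC -s /var/tmp/bdevperf.sock save_config > bperf.json   # live bdevperf -> JSON
  # restart both processes purely from the saved configuration (backgrounded, managed by the harness)
  sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c tgt.json &
  "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bperf.json &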
00:20:12.656 [2024-10-09 00:27:43.080609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.656 [2024-10-09 00:27:43.281911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.914 [2024-10-09 00:27:43.313936] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.914 [2024-10-09 00:27:43.314130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.175 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.175 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:13.175 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:13.175 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:13.175 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.175 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.175 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3278825 00:20:13.175 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3278825 /var/tmp/bdevperf.sock 00:20:13.175 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3278825 ']' 00:20:13.176 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.176 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:13.176 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.176 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:13.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
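Because the replayed bperfcfg already contains the keyring_file_add_key and bdev_nvme_attach_controller entries, no explicit RPCs are issued this time; as the trace further down shows (target/tls.sh@279-280), the harness only confirms that the controller came up before driving I/O, roughly:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  name=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || exit 1                            # controller must exist before perform_tests
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests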
00:20:13.176 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:13.176 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.176 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:13.176 "subsystems": [ 00:20:13.176 { 00:20:13.176 "subsystem": "keyring", 00:20:13.176 "config": [ 00:20:13.176 { 00:20:13.176 "method": "keyring_file_add_key", 00:20:13.176 "params": { 00:20:13.176 "name": "key0", 00:20:13.176 "path": "/tmp/tmp.z1OMhTULXn" 00:20:13.176 } 00:20:13.176 } 00:20:13.176 ] 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "subsystem": "iobuf", 00:20:13.176 "config": [ 00:20:13.176 { 00:20:13.176 "method": "iobuf_set_options", 00:20:13.176 "params": { 00:20:13.176 "small_pool_count": 8192, 00:20:13.176 "large_pool_count": 1024, 00:20:13.176 "small_bufsize": 8192, 00:20:13.176 "large_bufsize": 135168 00:20:13.176 } 00:20:13.176 } 00:20:13.176 ] 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "subsystem": "sock", 00:20:13.176 "config": [ 00:20:13.176 { 00:20:13.176 "method": "sock_set_default_impl", 00:20:13.176 "params": { 00:20:13.176 "impl_name": "posix" 00:20:13.176 } 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "method": "sock_impl_set_options", 00:20:13.176 "params": { 00:20:13.176 "impl_name": "ssl", 00:20:13.176 "recv_buf_size": 4096, 00:20:13.176 "send_buf_size": 4096, 00:20:13.176 "enable_recv_pipe": true, 00:20:13.176 "enable_quickack": false, 00:20:13.176 "enable_placement_id": 0, 00:20:13.176 "enable_zerocopy_send_server": true, 00:20:13.176 "enable_zerocopy_send_client": false, 00:20:13.176 "zerocopy_threshold": 0, 00:20:13.176 "tls_version": 0, 00:20:13.176 "enable_ktls": false 00:20:13.176 } 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "method": "sock_impl_set_options", 00:20:13.176 "params": { 00:20:13.176 "impl_name": "posix", 00:20:13.176 "recv_buf_size": 2097152, 00:20:13.176 "send_buf_size": 2097152, 00:20:13.176 "enable_recv_pipe": true, 00:20:13.176 "enable_quickack": false, 00:20:13.176 "enable_placement_id": 0, 00:20:13.176 "enable_zerocopy_send_server": true, 00:20:13.176 "enable_zerocopy_send_client": false, 00:20:13.176 "zerocopy_threshold": 0, 00:20:13.176 "tls_version": 0, 00:20:13.176 "enable_ktls": false 00:20:13.176 } 00:20:13.176 } 00:20:13.176 ] 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "subsystem": "vmd", 00:20:13.176 "config": [] 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "subsystem": "accel", 00:20:13.176 "config": [ 00:20:13.176 { 00:20:13.176 "method": "accel_set_options", 00:20:13.176 "params": { 00:20:13.176 "small_cache_size": 128, 00:20:13.176 "large_cache_size": 16, 00:20:13.176 "task_count": 2048, 00:20:13.176 "sequence_count": 2048, 00:20:13.176 "buf_count": 2048 00:20:13.176 } 00:20:13.176 } 00:20:13.176 ] 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "subsystem": "bdev", 00:20:13.176 "config": [ 00:20:13.176 { 00:20:13.176 "method": "bdev_set_options", 00:20:13.176 "params": { 00:20:13.176 "bdev_io_pool_size": 65535, 00:20:13.176 "bdev_io_cache_size": 256, 00:20:13.176 "bdev_auto_examine": true, 00:20:13.176 "iobuf_small_cache_size": 128, 00:20:13.176 "iobuf_large_cache_size": 16 00:20:13.176 } 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "method": "bdev_raid_set_options", 00:20:13.176 "params": { 00:20:13.176 "process_window_size_kb": 1024, 00:20:13.176 "process_max_bandwidth_mb_sec": 0 00:20:13.176 } 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "method": "bdev_iscsi_set_options", 00:20:13.176 "params": { 00:20:13.176 
"timeout_sec": 30 00:20:13.176 } 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "method": "bdev_nvme_set_options", 00:20:13.176 "params": { 00:20:13.176 "action_on_timeout": "none", 00:20:13.176 "timeout_us": 0, 00:20:13.176 "timeout_admin_us": 0, 00:20:13.176 "keep_alive_timeout_ms": 10000, 00:20:13.176 "arbitration_burst": 0, 00:20:13.176 "low_priority_weight": 0, 00:20:13.176 "medium_priority_weight": 0, 00:20:13.176 "high_priority_weight": 0, 00:20:13.176 "nvme_adminq_poll_period_us": 10000, 00:20:13.176 "nvme_ioq_poll_period_us": 0, 00:20:13.176 "io_queue_requests": 512, 00:20:13.176 "delay_cmd_submit": true, 00:20:13.176 "transport_retry_count": 4, 00:20:13.176 "bdev_retry_count": 3, 00:20:13.176 "transport_ack_timeout": 0, 00:20:13.176 "ctrlr_loss_timeout_sec": 0, 00:20:13.176 "reconnect_delay_sec": 0, 00:20:13.176 "fast_io_fail_timeout_sec": 0, 00:20:13.176 "disable_auto_failback": false, 00:20:13.176 "generate_uuids": false, 00:20:13.176 "transport_tos": 0, 00:20:13.176 "nvme_error_stat": false, 00:20:13.176 "rdma_srq_size": 0, 00:20:13.176 "io_path_stat": false, 00:20:13.176 "allow_accel_sequence": false, 00:20:13.176 "rdma_max_cq_size": 0, 00:20:13.176 "rdma_cm_event_timeout_ms": 0, 00:20:13.176 "dhchap_digests": [ 00:20:13.176 "sha256", 00:20:13.176 "sha384", 00:20:13.176 "sha512" 00:20:13.176 ], 00:20:13.176 "dhchap_dhgroups": [ 00:20:13.176 "null", 00:20:13.176 "ffdhe2048", 00:20:13.176 "ffdhe3072", 00:20:13.176 "ffdhe4096", 00:20:13.176 "ffdhe6144", 00:20:13.176 "ffdhe8192" 00:20:13.176 ] 00:20:13.176 } 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "method": "bdev_nvme_attach_controller", 00:20:13.176 "params": { 00:20:13.176 "name": "nvme0", 00:20:13.176 "trtype": "TCP", 00:20:13.176 "adrfam": "IPv4", 00:20:13.176 "traddr": "10.0.0.2", 00:20:13.176 "trsvcid": "4420", 00:20:13.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.176 "prchk_reftag": false, 00:20:13.176 "prchk_guard": false, 00:20:13.176 "ctrlr_loss_timeout_sec": 0, 00:20:13.176 "reconnect_delay_sec": 0, 00:20:13.176 "fast_io_fail_timeout_sec": 0, 00:20:13.176 "psk": "key0", 00:20:13.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.176 "hdgst": false, 00:20:13.176 "ddgst": false, 00:20:13.176 "multipath": "multipath" 00:20:13.176 } 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "method": "bdev_nvme_set_hotplug", 00:20:13.176 "params": { 00:20:13.176 "period_us": 100000, 00:20:13.176 "enable": false 00:20:13.176 } 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "method": "bdev_enable_histogram", 00:20:13.176 "params": { 00:20:13.176 "name": "nvme0n1", 00:20:13.176 "enable": true 00:20:13.176 } 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "method": "bdev_wait_for_examine" 00:20:13.176 } 00:20:13.176 ] 00:20:13.176 }, 00:20:13.176 { 00:20:13.176 "subsystem": "nbd", 00:20:13.176 "config": [] 00:20:13.176 } 00:20:13.176 ] 00:20:13.176 }' 00:20:13.438 [2024-10-09 00:27:43.811402] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:20:13.438 [2024-10-09 00:27:43.811453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278825 ] 00:20:13.438 [2024-10-09 00:27:43.885391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.438 [2024-10-09 00:27:43.938895] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.698 [2024-10-09 00:27:44.073623] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.269 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.269 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:14.269 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:14.269 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:14.269 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.269 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.269 Running I/O for 1 seconds... 00:20:15.502 5473.00 IOPS, 21.38 MiB/s 00:20:15.502 Latency(us) 00:20:15.502 [2024-10-08T22:27:46.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.502 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.502 Verification LBA range: start 0x0 length 0x2000 00:20:15.502 nvme0n1 : 1.02 5492.65 21.46 0.00 0.00 23101.96 4587.52 34297.17 00:20:15.502 [2024-10-08T22:27:46.137Z] =================================================================================================================== 00:20:15.502 [2024-10-08T22:27:46.137Z] Total : 5492.65 21.46 0.00 0.00 23101.96 4587.52 34297.17 00:20:15.502 { 00:20:15.502 "results": [ 00:20:15.502 { 00:20:15.502 "job": "nvme0n1", 00:20:15.502 "core_mask": "0x2", 00:20:15.502 "workload": "verify", 00:20:15.502 "status": "finished", 00:20:15.502 "verify_range": { 00:20:15.502 "start": 0, 00:20:15.502 "length": 8192 00:20:15.502 }, 00:20:15.502 "queue_depth": 128, 00:20:15.502 "io_size": 4096, 00:20:15.502 "runtime": 1.019726, 00:20:15.502 "iops": 5492.651947680063, 00:20:15.502 "mibps": 21.455671670625247, 00:20:15.502 "io_failed": 0, 00:20:15.502 "io_timeout": 0, 00:20:15.502 "avg_latency_us": 23101.956783907637, 00:20:15.502 "min_latency_us": 4587.52, 00:20:15.502 "max_latency_us": 34297.17333333333 00:20:15.502 } 00:20:15.502 ], 00:20:15.502 "core_count": 1 00:20:15.502 } 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid 
']' 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:15.502 nvmf_trace.0 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3278825 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3278825 ']' 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3278825 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.502 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3278825 00:20:15.502 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:15.502 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:15.502 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3278825' 00:20:15.502 killing process with pid 3278825 00:20:15.502 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3278825 00:20:15.502 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.502 00:20:15.502 Latency(us) 00:20:15.502 [2024-10-08T22:27:46.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.502 [2024-10-08T22:27:46.137Z] =================================================================================================================== 00:20:15.502 [2024-10-08T22:27:46.137Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.502 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3278825 00:20:15.764 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:15.764 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:15.764 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:15.764 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.764 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:15.764 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.764 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:15.764 rmmod nvme_tcp 00:20:15.764 rmmod nvme_fabrics 00:20:15.764 rmmod nvme_keyring 00:20:15.764 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:15.764 00:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 3278657 ']' 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 3278657 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3278657 ']' 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3278657 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3278657 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3278657' 00:20:15.765 killing process with pid 3278657 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3278657 00:20:15.765 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3278657 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.025 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.936 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:17.936 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hUEznNur8V /tmp/tmp.VCOFV9tFaE /tmp/tmp.z1OMhTULXn 00:20:17.936 00:20:17.936 real 1m29.184s 00:20:17.936 user 2m21.514s 00:20:17.936 sys 0m27.020s 00:20:17.936 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:17.936 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.936 ************************************ 00:20:17.936 END TEST nvmf_tls 
00:20:17.936 ************************************ 00:20:17.936 00:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:17.936 00:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:17.936 00:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:17.936 00:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:18.210 ************************************ 00:20:18.210 START TEST nvmf_fips 00:20:18.210 ************************************ 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:18.211 * Looking for test storage... 00:20:18.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:18.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.211 --rc genhtml_branch_coverage=1 00:20:18.211 --rc genhtml_function_coverage=1 00:20:18.211 --rc genhtml_legend=1 00:20:18.211 --rc geninfo_all_blocks=1 00:20:18.211 --rc geninfo_unexecuted_blocks=1 00:20:18.211 00:20:18.211 ' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:18.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.211 --rc genhtml_branch_coverage=1 00:20:18.211 --rc genhtml_function_coverage=1 00:20:18.211 --rc genhtml_legend=1 00:20:18.211 --rc geninfo_all_blocks=1 00:20:18.211 --rc geninfo_unexecuted_blocks=1 00:20:18.211 00:20:18.211 ' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:18.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.211 --rc genhtml_branch_coverage=1 00:20:18.211 --rc genhtml_function_coverage=1 00:20:18.211 --rc genhtml_legend=1 00:20:18.211 --rc geninfo_all_blocks=1 00:20:18.211 --rc geninfo_unexecuted_blocks=1 00:20:18.211 00:20:18.211 ' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:18.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.211 --rc genhtml_branch_coverage=1 00:20:18.211 --rc genhtml_function_coverage=1 00:20:18.211 --rc genhtml_legend=1 00:20:18.211 --rc geninfo_all_blocks=1 00:20:18.211 --rc geninfo_unexecuted_blocks=1 00:20:18.211 00:20:18.211 ' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:18.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:18.211 00:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:18.211 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:18.472 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:18.473 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:18.473 Error setting digest 00:20:18.473 4012506B427F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:18.473 4012506B427F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:18.473 
00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:18.473 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.608 00:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:26.608 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:26.608 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:26.608 00:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:26.608 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:26.608 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:26.608 00:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:26.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:20:26.608 00:20:26.608 --- 10.0.0.2 ping statistics --- 00:20:26.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.608 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:20:26.608 00:20:26.608 --- 10.0.0.1 ping statistics --- 00:20:26.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.608 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:26.608 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=3283650 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 3283650 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3283650 ']' 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.609 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.609 [2024-10-09 00:27:56.617178] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
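The records above are nvmftestinit doing its network plumbing before the FIPS test proper: the two e810 ports are detected as cvl_0_0 and cvl_0_1, the target-side port is moved into a network namespace, 10.0.0.2/10.0.0.1 are assigned, TCP port 4420 is opened in iptables, connectivity is checked with ping in both directions, and nvmf_tgt is then started inside the namespace. A rough standalone equivalent is sketched below; the interface names come from this particular machine and the SPDK path is a placeholder, so adjust both for any other setup (root privileges assumed).

    TGT_NS=cvl_0_0_ns_spdk
    ip netns add "$TGT_NS"
    ip link set cvl_0_0 netns "$TGT_NS"                          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator-side address
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target-side address
    ip link set cvl_0_1 up
    ip netns exec "$TGT_NS" ip link set cvl_0_0 up
    ip netns exec "$TGT_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2 && ip netns exec "$TGT_NS" ping -c 1 10.0.0.1
    # Start the target inside the namespace on core mask 0x2, as nvmfappstart does here:
    ip netns exec "$TGT_NS" /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &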
00:20:26.609 [2024-10-09 00:27:56.617255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.609 [2024-10-09 00:27:56.708808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.609 [2024-10-09 00:27:56.799616] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.609 [2024-10-09 00:27:56.799681] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.609 [2024-10-09 00:27:56.799689] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.609 [2024-10-09 00:27:56.799696] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.609 [2024-10-09 00:27:56.799702] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.609 [2024-10-09 00:27:56.800546] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.STQ 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.STQ 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.STQ 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.STQ 00:20:26.870 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:27.141 [2024-10-09 00:27:57.654543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.141 [2024-10-09 00:27:57.670517] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.141 [2024-10-09 00:27:57.670866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.141 malloc0 00:20:27.141 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.141 00:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3283765 00:20:27.141 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3283765 /var/tmp/bdevperf.sock 00:20:27.141 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.141 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3283765 ']' 00:20:27.141 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.141 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.141 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.141 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.141 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:27.404 [2024-10-09 00:27:57.831381] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:20:27.404 [2024-10-09 00:27:57.831456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283765 ] 00:20:27.404 [2024-10-09 00:27:57.914278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.404 [2024-10-09 00:27:58.006861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.346 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.346 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:28.346 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.STQ 00:20:28.346 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.607 [2024-10-09 00:27:59.004897] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.607 TLSTESTn1 00:20:28.607 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:28.607 Running I/O for 10 seconds... 
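Before the 10-second run whose per-second IOPS follow, fips.sh has generated an NVMe TLS interchange-format PSK, written it to a mode-0600 temp file, configured the target with it, and registered it with the bdevperf app as keyring entry key0 so that TLSTESTn1 can be attached with --psk key0. A condensed sketch of the initiator-side steps is below; the rpc.py path and the key value are placeholders rather than the exact values used in this run.

    RPC=/path/to/spdk/scripts/rpc.py                   # assumption: rpc.py from the SPDK tree
    KEY='NVMeTLSkey-1:01:...'                          # assumption: a valid interchange-format PSK
    KEY_PATH=$(mktemp -t spdk-psk.XXX)
    echo -n "$KEY" > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"                             # key files must not be world-readable
    # Register the key with the running bdevperf app and attach the TLS-protected controller:
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    # Kick off the verify workload that produces the results below:
    /path/to/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests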
00:20:30.931 4714.00 IOPS, 18.41 MiB/s [2024-10-08T22:28:02.505Z] 5492.00 IOPS, 21.45 MiB/s [2024-10-08T22:28:03.446Z] 5638.67 IOPS, 22.03 MiB/s [2024-10-08T22:28:04.387Z] 5691.75 IOPS, 22.23 MiB/s [2024-10-08T22:28:05.331Z] 5798.20 IOPS, 22.65 MiB/s [2024-10-08T22:28:06.270Z] 5870.50 IOPS, 22.93 MiB/s [2024-10-08T22:28:07.659Z] 5915.86 IOPS, 23.11 MiB/s [2024-10-08T22:28:08.598Z] 5922.50 IOPS, 23.13 MiB/s [2024-10-08T22:28:09.558Z] 5932.11 IOPS, 23.17 MiB/s [2024-10-08T22:28:09.558Z] 5946.70 IOPS, 23.23 MiB/s 00:20:38.923 Latency(us) 00:20:38.923 [2024-10-08T22:28:09.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.923 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:38.923 Verification LBA range: start 0x0 length 0x2000 00:20:38.923 TLSTESTn1 : 10.01 5950.76 23.25 0.00 0.00 21474.34 5925.55 48496.64 00:20:38.923 [2024-10-08T22:28:09.558Z] =================================================================================================================== 00:20:38.923 [2024-10-08T22:28:09.558Z] Total : 5950.76 23.25 0.00 0.00 21474.34 5925.55 48496.64 00:20:38.923 { 00:20:38.923 "results": [ 00:20:38.923 { 00:20:38.923 "job": "TLSTESTn1", 00:20:38.923 "core_mask": "0x4", 00:20:38.923 "workload": "verify", 00:20:38.923 "status": "finished", 00:20:38.923 "verify_range": { 00:20:38.923 "start": 0, 00:20:38.923 "length": 8192 00:20:38.923 }, 00:20:38.923 "queue_depth": 128, 00:20:38.923 "io_size": 4096, 00:20:38.923 "runtime": 10.01452, 00:20:38.923 "iops": 5950.759497210051, 00:20:38.923 "mibps": 23.245154285976763, 00:20:38.923 "io_failed": 0, 00:20:38.923 "io_timeout": 0, 00:20:38.923 "avg_latency_us": 21474.338812632144, 00:20:38.923 "min_latency_us": 5925.546666666667, 00:20:38.924 "max_latency_us": 48496.64 00:20:38.924 } 00:20:38.924 ], 00:20:38.924 "core_count": 1 00:20:38.924 } 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:38.924 nvmf_trace.0 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3283765 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3283765 ']' 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 3283765 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3283765 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3283765' 00:20:38.924 killing process with pid 3283765 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3283765 00:20:38.924 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.924 00:20:38.924 Latency(us) 00:20:38.924 [2024-10-08T22:28:09.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.924 [2024-10-08T22:28:09.559Z] =================================================================================================================== 00:20:38.924 [2024-10-08T22:28:09.559Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3283765 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:38.924 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.186 rmmod nvme_tcp 00:20:39.186 rmmod nvme_fabrics 00:20:39.186 rmmod nvme_keyring 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 3283650 ']' 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 3283650 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3283650 ']' 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3283650 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3283650 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:39.186 00:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3283650' 00:20:39.186 killing process with pid 3283650 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3283650 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3283650 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.186 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.733 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.733 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.STQ 00:20:41.733 00:20:41.733 real 0m23.284s 00:20:41.733 user 0m25.207s 00:20:41.733 sys 0m9.472s 00:20:41.733 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:41.733 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:41.733 ************************************ 00:20:41.733 END TEST nvmf_fips 00:20:41.733 ************************************ 00:20:41.734 00:28:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:41.734 00:28:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:41.734 00:28:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:41.734 00:28:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.734 ************************************ 00:20:41.734 START TEST nvmf_control_msg_list 00:20:41.734 ************************************ 00:20:41.734 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:41.734 * Looking for test storage... 
00:20:41.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.734 --rc genhtml_branch_coverage=1 00:20:41.734 --rc genhtml_function_coverage=1 00:20:41.734 --rc genhtml_legend=1 00:20:41.734 --rc geninfo_all_blocks=1 00:20:41.734 --rc geninfo_unexecuted_blocks=1 00:20:41.734 00:20:41.734 ' 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.734 --rc genhtml_branch_coverage=1 00:20:41.734 --rc genhtml_function_coverage=1 00:20:41.734 --rc genhtml_legend=1 00:20:41.734 --rc geninfo_all_blocks=1 00:20:41.734 --rc geninfo_unexecuted_blocks=1 00:20:41.734 00:20:41.734 ' 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.734 --rc genhtml_branch_coverage=1 00:20:41.734 --rc genhtml_function_coverage=1 00:20:41.734 --rc genhtml_legend=1 00:20:41.734 --rc geninfo_all_blocks=1 00:20:41.734 --rc geninfo_unexecuted_blocks=1 00:20:41.734 00:20:41.734 ' 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.734 --rc genhtml_branch_coverage=1 00:20:41.734 --rc genhtml_function_coverage=1 00:20:41.734 --rc genhtml_legend=1 00:20:41.734 --rc geninfo_all_blocks=1 00:20:41.734 --rc geninfo_unexecuted_blocks=1 00:20:41.734 00:20:41.734 ' 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.734 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:41.735 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:49.881 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.881 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:49.881 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:49.881 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:49.881 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:49.881 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:49.881 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:49.881 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:49.881 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:49.882 00:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:49.882 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.882 00:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:49.882 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:49.882 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:49.882 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.882 00:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:49.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:20:49.882 00:20:49.882 --- 10.0.0.2 ping statistics --- 00:20:49.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.882 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:49.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:20:49.882 00:20:49.882 --- 10.0.0.1 ping statistics --- 00:20:49.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.882 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:49.882 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=3290374 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 3290374 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3290374 ']' 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.883 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:49.883 [2024-10-09 00:28:19.716008] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:20:49.883 [2024-10-09 00:28:19.716074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.883 [2024-10-09 00:28:19.804597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.883 [2024-10-09 00:28:19.897834] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.883 [2024-10-09 00:28:19.897891] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.883 [2024-10-09 00:28:19.897900] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.883 [2024-10-09 00:28:19.897907] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.883 [2024-10-09 00:28:19.897913] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:49.883 [2024-10-09 00:28:19.898706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.144 [2024-10-09 00:28:20.604504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.144 Malloc0 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.144 00:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.144 [2024-10-09 00:28:20.669687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3290451 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3290452 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3290453 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3290451 00:20:50.144 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.144 [2024-10-09 00:28:20.770617] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:50.144 [2024-10-09 00:28:20.771002] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:50.144 [2024-10-09 00:28:20.771222] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:51.545 Initializing NVMe Controllers 00:20:51.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:51.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:51.545 Initialization complete. Launching workers. 
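Condensed, the control_msg_list scenario configured above amounts to the following sketch (paths abbreviated to the spdk tree; the RPC arguments, NQN, Malloc0 sizing, listener address and core masks are the values from this run, and the three perf instances are written as a loop rather than the three separately tracked pids the script uses). The point of the setup is that the TCP transport is created with a single control message buffer, so the three single-queue initiators have to contend for it.

  # Target side: TCP transport capped at one control message, reduced in-capsule data size
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: three single-queue 4 KiB randread workers on distinct cores, one second each
  for mask in 0x2 0x4 0x8; do
      ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait

The three per-core latency reports that follow are the output of those runs.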
00:20:51.545 ======================================================== 00:20:51.545 Latency(us) 00:20:51.545 Device Information : IOPS MiB/s Average min max 00:20:51.545 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40922.56 40807.43 41369.15 00:20:51.545 ======================================================== 00:20:51.545 Total : 25.00 0.10 40922.56 40807.43 41369.15 00:20:51.545 00:20:51.545 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3290452 00:20:51.545 Initializing NVMe Controllers 00:20:51.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:51.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:51.545 Initialization complete. Launching workers. 00:20:51.545 ======================================================== 00:20:51.545 Latency(us) 00:20:51.545 Device Information : IOPS MiB/s Average min max 00:20:51.545 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1457.00 5.69 686.24 285.50 982.87 00:20:51.545 ======================================================== 00:20:51.545 Total : 1457.00 5.69 686.24 285.50 982.87 00:20:51.545 00:20:51.545 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3290453 00:20:51.545 Initializing NVMe Controllers 00:20:51.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:51.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:51.545 Initialization complete. Launching workers. 00:20:51.545 ======================================================== 00:20:51.545 Latency(us) 00:20:51.545 Device Information : IOPS MiB/s Average min max 00:20:51.545 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1460.00 5.70 685.02 286.52 952.96 00:20:51.545 ======================================================== 00:20:51.545 Total : 1460.00 5.70 685.02 286.52 952.96 00:20:51.545 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:51.545 rmmod nvme_tcp 00:20:51.545 rmmod nvme_fabrics 00:20:51.545 rmmod nvme_keyring 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' 
-n 3290374 ']' 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 3290374 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3290374 ']' 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3290374 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3290374 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3290374' 00:20:51.545 killing process with pid 3290374 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3290374 00:20:51.545 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3290374 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.806 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.348 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:54.348 00:20:54.348 real 0m12.465s 00:20:54.348 user 0m8.025s 00:20:54.348 sys 0m6.568s 00:20:54.348 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:54.348 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:54.348 ************************************ 00:20:54.348 END TEST nvmf_control_msg_list 00:20:54.349 ************************************ 
00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:54.349 ************************************ 00:20:54.349 START TEST nvmf_wait_for_buf 00:20:54.349 ************************************ 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:54.349 * Looking for test storage... 00:20:54.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:54.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.349 --rc genhtml_branch_coverage=1 00:20:54.349 --rc genhtml_function_coverage=1 00:20:54.349 --rc genhtml_legend=1 00:20:54.349 --rc geninfo_all_blocks=1 00:20:54.349 --rc geninfo_unexecuted_blocks=1 00:20:54.349 00:20:54.349 ' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:54.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.349 --rc genhtml_branch_coverage=1 00:20:54.349 --rc genhtml_function_coverage=1 00:20:54.349 --rc genhtml_legend=1 00:20:54.349 --rc geninfo_all_blocks=1 00:20:54.349 --rc geninfo_unexecuted_blocks=1 00:20:54.349 00:20:54.349 ' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:54.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.349 --rc genhtml_branch_coverage=1 00:20:54.349 --rc genhtml_function_coverage=1 00:20:54.349 --rc genhtml_legend=1 00:20:54.349 --rc geninfo_all_blocks=1 00:20:54.349 --rc geninfo_unexecuted_blocks=1 00:20:54.349 00:20:54.349 ' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:54.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.349 --rc genhtml_branch_coverage=1 00:20:54.349 --rc genhtml_function_coverage=1 00:20:54.349 --rc genhtml_legend=1 00:20:54.349 --rc geninfo_all_blocks=1 00:20:54.349 --rc geninfo_unexecuted_blocks=1 00:20:54.349 00:20:54.349 ' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.349 00:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:54.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:54.349 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.498 
00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:02.498 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:02.498 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.498 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:02.499 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:02.499 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.499 00:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.499 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:02.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:21:02.499 00:21:02.499 --- 10.0.0.2 ping statistics --- 00:21:02.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.499 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:21:02.499 00:21:02.499 --- 10.0.0.1 ping statistics --- 00:21:02.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.499 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=3295059 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 3295059 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3295059 ']' 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.499 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.499 [2024-10-09 00:28:32.350002] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:21:02.499 [2024-10-09 00:28:32.350069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.499 [2024-10-09 00:28:32.437893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.499 [2024-10-09 00:28:32.531268] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.499 [2024-10-09 00:28:32.531326] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.499 [2024-10-09 00:28:32.531335] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.499 [2024-10-09 00:28:32.531342] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.499 [2024-10-09 00:28:32.531349] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.499 [2024-10-09 00:28:32.532148] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.761 00:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.761 Malloc0 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.761 [2024-10-09 00:28:33.323683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:02.761 [2024-10-09 00:28:33.360034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.761 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:03.025 [2024-10-09 00:28:33.442821] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:04.417 Initializing NVMe Controllers 00:21:04.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:04.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:04.417 Initialization complete. Launching workers. 00:21:04.417 ======================================================== 00:21:04.417 Latency(us) 00:21:04.417 Device Information : IOPS MiB/s Average min max 00:21:04.417 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 119.00 14.87 34975.00 8003.38 111735.51 00:21:04.417 ======================================================== 00:21:04.417 Total : 119.00 14.87 34975.00 8003.38 111735.51 00:21:04.417 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1878 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1878 -eq 0 ]] 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:04.417 rmmod nvme_tcp 00:21:04.417 rmmod nvme_fabrics 00:21:04.417 rmmod nvme_keyring 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 3295059 ']' 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 3295059 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3295059 ']' 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3295059 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:04.417 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3295059 00:21:04.417 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:04.417 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:04.417 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3295059' 00:21:04.417 killing process with pid 3295059 00:21:04.417 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3295059 00:21:04.417 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3295059 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.678 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.225 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:07.225 00:21:07.225 real 0m12.766s 00:21:07.225 user 0m5.106s 00:21:07.225 sys 0m6.228s 00:21:07.225 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:07.225 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:07.225 ************************************ 00:21:07.225 END TEST nvmf_wait_for_buf 00:21:07.225 ************************************ 00:21:07.225 00:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:07.225 00:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:07.225 00:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:07.225 00:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:07.225 00:28:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:07.225 00:28:37 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:13.909 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:13.909 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:13.909 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:13.909 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:13.909 ************************************ 00:21:13.909 START TEST nvmf_perf_adq 00:21:13.909 ************************************ 00:21:13.909 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:14.172 * Looking for test storage... 00:21:14.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.172 00:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:14.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.172 --rc genhtml_branch_coverage=1 00:21:14.172 --rc genhtml_function_coverage=1 00:21:14.172 --rc genhtml_legend=1 00:21:14.172 --rc geninfo_all_blocks=1 00:21:14.172 --rc geninfo_unexecuted_blocks=1 00:21:14.172 00:21:14.172 ' 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:14.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.172 --rc genhtml_branch_coverage=1 00:21:14.172 --rc genhtml_function_coverage=1 00:21:14.172 --rc genhtml_legend=1 00:21:14.172 --rc geninfo_all_blocks=1 00:21:14.172 --rc geninfo_unexecuted_blocks=1 00:21:14.172 00:21:14.172 ' 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:14.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.172 --rc genhtml_branch_coverage=1 00:21:14.172 --rc genhtml_function_coverage=1 00:21:14.172 --rc genhtml_legend=1 00:21:14.172 --rc geninfo_all_blocks=1 00:21:14.172 --rc geninfo_unexecuted_blocks=1 00:21:14.172 00:21:14.172 ' 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:14.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.172 --rc genhtml_branch_coverage=1 00:21:14.172 --rc genhtml_function_coverage=1 00:21:14.172 --rc genhtml_legend=1 00:21:14.172 --rc geninfo_all_blocks=1 00:21:14.172 --rc geninfo_unexecuted_blocks=1 00:21:14.172 00:21:14.172 ' 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
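The gather_supported_nvmf_pci_devs pass traced earlier (and repeated below for perf_adq) locates the E810 ports by PCI vendor/device ID and then maps each function to its kernel net device through sysfs. A rough standalone equivalent is sketched here, assuming the 0x8086:0x159b IDs seen in this run; the script itself walks its cached pci_bus_cache arrays rather than calling lspci.

    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do    # E810 functions, e.g. 0000:4b:00.0
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do      # net devices bound to that function
            echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done
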
00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.172 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:14.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:14.173 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.173 00:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.320 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:22.321 00:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:22.321 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:22.321 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:22.321 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:22.321 00:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:22.321 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:22.321 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:22.895 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:24.812 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:30.110 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:30.110 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:30.110 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:30.110 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:30.111 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:30.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:21:30.111 00:21:30.111 --- 10.0.0.2 ping statistics --- 00:21:30.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.111 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:30.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:21:30.111 00:21:30.111 --- 10.0.0.1 ping statistics --- 00:21:30.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.111 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3305042 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3305042 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3305042 ']' 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.111 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.111 [2024-10-09 00:29:00.533168] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:21:30.111 [2024-10-09 00:29:00.533232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.111 [2024-10-09 00:29:00.620966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:30.111 [2024-10-09 00:29:00.716518] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.111 [2024-10-09 00:29:00.716578] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.111 [2024-10-09 00:29:00.716587] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.111 [2024-10-09 00:29:00.716595] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.111 [2024-10-09 00:29:00.716602] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.111 [2024-10-09 00:29:00.718713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.111 [2024-10-09 00:29:00.718874] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.111 [2024-10-09 00:29:00.719146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.111 [2024-10-09 00:29:00.719149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.055 
00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.055 [2024-10-09 00:29:01.558541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.055 Malloc1 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.055 [2024-10-09 00:29:01.612173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3305392 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:31.055 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:33.626 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:33.626 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.626 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.626 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.626 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:33.626 "tick_rate": 2400000000, 00:21:33.626 "poll_groups": [ 00:21:33.626 { 00:21:33.626 "name": "nvmf_tgt_poll_group_000", 00:21:33.626 "admin_qpairs": 1, 00:21:33.626 "io_qpairs": 1, 00:21:33.627 "current_admin_qpairs": 1, 00:21:33.627 "current_io_qpairs": 1, 00:21:33.627 "pending_bdev_io": 0, 00:21:33.627 "completed_nvme_io": 16925, 00:21:33.627 "transports": [ 00:21:33.627 { 00:21:33.627 "trtype": "TCP" 00:21:33.627 } 00:21:33.627 ] 00:21:33.627 }, 00:21:33.627 { 00:21:33.627 "name": "nvmf_tgt_poll_group_001", 00:21:33.627 "admin_qpairs": 0, 00:21:33.627 "io_qpairs": 1, 00:21:33.627 "current_admin_qpairs": 0, 00:21:33.627 "current_io_qpairs": 1, 00:21:33.627 "pending_bdev_io": 0, 00:21:33.627 "completed_nvme_io": 20014, 00:21:33.627 "transports": [ 00:21:33.627 { 00:21:33.627 "trtype": "TCP" 00:21:33.627 } 00:21:33.627 ] 00:21:33.627 }, 00:21:33.627 { 00:21:33.627 "name": "nvmf_tgt_poll_group_002", 00:21:33.627 "admin_qpairs": 0, 00:21:33.627 "io_qpairs": 1, 00:21:33.627 "current_admin_qpairs": 0, 00:21:33.627 "current_io_qpairs": 1, 00:21:33.627 "pending_bdev_io": 0, 00:21:33.627 "completed_nvme_io": 17529, 00:21:33.627 "transports": [ 00:21:33.627 { 00:21:33.627 "trtype": "TCP" 00:21:33.627 } 00:21:33.627 ] 00:21:33.627 }, 00:21:33.627 { 00:21:33.627 "name": "nvmf_tgt_poll_group_003", 00:21:33.627 "admin_qpairs": 0, 00:21:33.627 "io_qpairs": 1, 00:21:33.627 "current_admin_qpairs": 0, 00:21:33.628 "current_io_qpairs": 1, 00:21:33.628 "pending_bdev_io": 0, 00:21:33.628 "completed_nvme_io": 17427, 00:21:33.628 "transports": [ 00:21:33.628 { 00:21:33.628 "trtype": "TCP" 00:21:33.628 } 00:21:33.628 ] 00:21:33.628 } 00:21:33.628 ] 00:21:33.628 }' 00:21:33.628 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:33.628 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:33.628 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:33.628 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:33.628 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3305392 00:21:41.760 Initializing NVMe Controllers 00:21:41.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:41.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:41.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:41.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:41.760 Initialization complete. Launching workers. 00:21:41.760 ======================================================== 00:21:41.760 Latency(us) 00:21:41.760 Device Information : IOPS MiB/s Average min max 00:21:41.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12652.70 49.42 5073.71 1207.63 44255.65 00:21:41.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13527.20 52.84 4730.41 1286.44 11780.94 00:21:41.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13564.40 52.99 4718.61 1198.08 13625.19 00:21:41.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13028.60 50.89 4912.04 1266.05 13751.29 00:21:41.760 ======================================================== 00:21:41.760 Total : 52772.90 206.14 4854.52 1198.08 44255.65 00:21:41.760 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.760 rmmod nvme_tcp 00:21:41.760 rmmod nvme_fabrics 00:21:41.760 rmmod nvme_keyring 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3305042 ']' 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3305042 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3305042 ']' 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3305042 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3305042 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3305042' 00:21:41.760 killing process with pid 3305042 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3305042 00:21:41.760 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3305042 00:21:41.760 00:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.760 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.671 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.671 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:43.671 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:43.671 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:45.583 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:46.966 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:52.256 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:52.256 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:52.256 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:52.256 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:52.257 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.257 00:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.257 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:21:52.518 00:21:52.518 --- 10.0.0.2 ping statistics --- 00:21:52.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.518 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:21:52.518 00:21:52.518 --- 10.0.0.1 ping statistics --- 00:21:52.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.518 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:52.518 net.core.busy_poll = 1 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:52.518 net.core.busy_read = 1 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:52.518 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3309855 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3309855 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3309855 ']' 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:52.779 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.779 [2024-10-09 00:29:23.301948] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:21:52.779 [2024-10-09 00:29:23.302016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.779 [2024-10-09 00:29:23.391363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.050 [2024-10-09 00:29:23.485790] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
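Condensed, the adq_configure_driver phase traced above comes down to the following host-side commands (a sketch using this run's interface name cvl_0_0, target address 10.0.0.2 and NVMe/TCP port 4420; the ethtool and tc calls are issued inside the target's network namespace, the sysctls in the default namespace):

  NS="ip netns exec cvl_0_0_ns_spdk"
  $NS ethtool --offload cvl_0_0 hw-tc-offload on                  # ADQ prerequisite: hardware TC offload
  $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                                  # busy-poll sockets instead of waiting on interrupts
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 gets queues 0-1, TC1 gets queues 2-3, offloaded in channel mode.
  $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  $NS tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 in hardware (skip_sw).
  $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # Finally the SPDK helper scripts/perf/nvmf/set_xps_rxqs cvl_0_0 is run to align XPS with the RX queues.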
00:21:53.050 [2024-10-09 00:29:23.485841] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.050 [2024-10-09 00:29:23.485850] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.050 [2024-10-09 00:29:23.485857] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.050 [2024-10-09 00:29:23.485864] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.050 [2024-10-09 00:29:23.487874] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.050 [2024-10-09 00:29:23.488035] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.050 [2024-10-09 00:29:23.488198] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.050 [2024-10-09 00:29:23.488199] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.626 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.887 00:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.887 [2024-10-09 00:29:24.318875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.887 Malloc1 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.887 [2024-10-09 00:29:24.384506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3310206 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:53.887 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:55.800 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:55.800 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.800 00:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.800 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.800 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:55.800 "tick_rate": 2400000000, 00:21:55.800 "poll_groups": [ 00:21:55.800 { 00:21:55.801 "name": "nvmf_tgt_poll_group_000", 00:21:55.801 "admin_qpairs": 1, 00:21:55.801 "io_qpairs": 1, 00:21:55.801 "current_admin_qpairs": 1, 00:21:55.801 "current_io_qpairs": 1, 00:21:55.801 "pending_bdev_io": 0, 00:21:55.801 "completed_nvme_io": 25191, 00:21:55.801 "transports": [ 00:21:55.801 { 00:21:55.801 "trtype": "TCP" 00:21:55.801 } 00:21:55.801 ] 00:21:55.801 }, 00:21:55.801 { 00:21:55.801 "name": "nvmf_tgt_poll_group_001", 00:21:55.801 "admin_qpairs": 0, 00:21:55.801 "io_qpairs": 3, 00:21:55.801 "current_admin_qpairs": 0, 00:21:55.801 "current_io_qpairs": 3, 00:21:55.801 "pending_bdev_io": 0, 00:21:55.801 "completed_nvme_io": 31858, 00:21:55.801 "transports": [ 00:21:55.801 { 00:21:55.801 "trtype": "TCP" 00:21:55.801 } 00:21:55.801 ] 00:21:55.801 }, 00:21:55.801 { 00:21:55.801 "name": "nvmf_tgt_poll_group_002", 00:21:55.801 "admin_qpairs": 0, 00:21:55.801 "io_qpairs": 0, 00:21:55.801 "current_admin_qpairs": 0, 00:21:55.801 "current_io_qpairs": 0, 00:21:55.801 "pending_bdev_io": 0, 00:21:55.801 "completed_nvme_io": 0, 00:21:55.801 "transports": [ 00:21:55.801 { 00:21:55.801 "trtype": "TCP" 00:21:55.801 } 00:21:55.801 ] 00:21:55.801 }, 00:21:55.801 { 00:21:55.801 "name": "nvmf_tgt_poll_group_003", 00:21:55.801 "admin_qpairs": 0, 00:21:55.801 "io_qpairs": 0, 00:21:55.801 "current_admin_qpairs": 0, 00:21:55.801 "current_io_qpairs": 0, 00:21:55.801 "pending_bdev_io": 0, 00:21:55.801 "completed_nvme_io": 0, 00:21:55.801 "transports": [ 00:21:55.801 { 00:21:55.801 "trtype": "TCP" 00:21:55.801 } 00:21:55.801 ] 00:21:55.801 } 00:21:55.801 ] 00:21:55.801 }' 00:21:55.801 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:55.801 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:56.061 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:56.061 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:56.061 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3310206 00:22:04.199 Initializing NVMe Controllers 00:22:04.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:04.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:04.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:04.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:04.199 Initialization complete. Launching workers. 
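Before the perf results that follow, note what the rpc_cmd calls above configured on the target and how the script checks that ADQ steering took effect. Rendered as plain scripts/rpc.py invocations against the default RPC socket (a sketch; rpc_cmd in the trace is the autotest wrapper around the same RPCs):

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  # Sock-layer ADQ knobs on the posix implementation: placement-id based poll-group
  # placement and zero-copy sends; then finish init (the target was started with --wait-for-rpc).
  $RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # While spdk_nvme_perf runs, count poll groups that carry no I/O qpairs. With ADQ
  # steering the connections land on a subset of the 4 poll groups; here 2 of 4 stay idle,
  # so the guard [[ $idle -lt 2 ]] at perf_adq.sh@109 does not trip.
  idle=$($RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)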
00:22:04.199 ======================================================== 00:22:04.199 Latency(us) 00:22:04.199 Device Information : IOPS MiB/s Average min max 00:22:04.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5706.10 22.29 11253.20 1531.44 59079.40 00:22:04.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6450.30 25.20 9921.29 1397.12 58279.47 00:22:04.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 17034.29 66.54 3756.49 875.12 45694.51 00:22:04.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8794.80 34.35 7276.03 1124.68 59478.81 00:22:04.199 ======================================================== 00:22:04.199 Total : 37985.48 148.38 6744.35 875.12 59478.81 00:22:04.199 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.199 rmmod nvme_tcp 00:22:04.199 rmmod nvme_fabrics 00:22:04.199 rmmod nvme_keyring 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3309855 ']' 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3309855 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3309855 ']' 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3309855 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3309855 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3309855' 00:22:04.199 killing process with pid 3309855 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3309855 00:22:04.199 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3309855 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:04.459 
00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.459 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.387 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.387 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:06.387 00:22:06.387 real 0m52.427s 00:22:06.387 user 2m49.685s 00:22:06.387 sys 0m11.507s 00:22:06.387 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:06.387 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.387 ************************************ 00:22:06.387 END TEST nvmf_perf_adq 00:22:06.387 ************************************ 00:22:06.387 00:29:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:06.387 00:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:06.387 00:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:06.387 00:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:06.652 ************************************ 00:22:06.652 START TEST nvmf_shutdown 00:22:06.652 ************************************ 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:06.652 * Looking for test storage... 
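Before the shutdown suite below gets going, the nvmf_perf_adq teardown traced above (nvmftestfini) condenses to roughly the following; remove_spdk_ns has its output redirected away in this log, so that step is paraphrased:

  modprobe -r nvme-tcp && modprobe -r nvme-fabrics       # unload host NVMe/TCP modules (nvme_keyring goes with them)
  kill "$nvmfpid"                                        # stop the nvmf_tgt started for this test (pid 3309855 here)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged with the SPDK_NVMF comment
  ip netns delete cvl_0_0_ns_spdk                        # paraphrase of remove_spdk_ns; returns cvl_0_0 to the default namespace
  ip -4 addr flush cvl_0_1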
00:22:06.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:06.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.652 --rc genhtml_branch_coverage=1 00:22:06.652 --rc genhtml_function_coverage=1 00:22:06.652 --rc genhtml_legend=1 00:22:06.652 --rc geninfo_all_blocks=1 00:22:06.652 --rc geninfo_unexecuted_blocks=1 00:22:06.652 00:22:06.652 ' 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:06.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.652 --rc genhtml_branch_coverage=1 00:22:06.652 --rc genhtml_function_coverage=1 00:22:06.652 --rc genhtml_legend=1 00:22:06.652 --rc geninfo_all_blocks=1 00:22:06.652 --rc geninfo_unexecuted_blocks=1 00:22:06.652 00:22:06.652 ' 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:06.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.652 --rc genhtml_branch_coverage=1 00:22:06.652 --rc genhtml_function_coverage=1 00:22:06.652 --rc genhtml_legend=1 00:22:06.652 --rc geninfo_all_blocks=1 00:22:06.652 --rc geninfo_unexecuted_blocks=1 00:22:06.652 00:22:06.652 ' 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:06.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.652 --rc genhtml_branch_coverage=1 00:22:06.652 --rc genhtml_function_coverage=1 00:22:06.652 --rc genhtml_legend=1 00:22:06.652 --rc geninfo_all_blocks=1 00:22:06.652 --rc geninfo_unexecuted_blocks=1 00:22:06.652 00:22:06.652 ' 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.652 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:06.653 00:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:06.653 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:06.914 ************************************ 00:22:06.914 START TEST nvmf_shutdown_tc1 00:22:06.914 ************************************ 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.914 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.058 00:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.058 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.059 00:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.059 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.059 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.059 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.059 00:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.059 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:22:15.059 00:22:15.059 --- 10.0.0.2 ping statistics --- 00:22:15.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.059 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:22:15.059 00:22:15.059 --- 10.0.0.1 ping statistics --- 00:22:15.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.059 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=3316342 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 3316342 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3316342 ']' 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
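The nvmftestinit/nvmfappstart sequence traced above rebuilds the same two-port loopback topology as the perf_adq run and then launches the target inside the namespace. Condensed (interface names and addresses are this run's values; the socket-polling loop stands in for waitforlisten, whose body is not shown in the log):

  # One E810 port acts as the initiator in the default namespace; the other is moved
  # into a private namespace and carries the NVMe-oF target address.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions
  modprobe nvme-tcp
  # Start the target on cores 1-4 (-m 0x1E) inside the namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done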
00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.059 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.059 [2024-10-09 00:29:44.895430] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:22:15.059 [2024-10-09 00:29:44.895502] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.059 [2024-10-09 00:29:44.990450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.059 [2024-10-09 00:29:45.083822] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.059 [2024-10-09 00:29:45.083883] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.060 [2024-10-09 00:29:45.083892] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.060 [2024-10-09 00:29:45.083901] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.060 [2024-10-09 00:29:45.083908] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.060 [2024-10-09 00:29:45.086045] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.060 [2024-10-09 00:29:45.086205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.060 [2024-10-09 00:29:45.086365] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.060 [2024-10-09 00:29:45.086366] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.321 [2024-10-09 00:29:45.772801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:15.321 00:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.321 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.322 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.322 Malloc1 
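The loop traced above appends one block of RPC commands per subsystem (1 through 10) to rpcs.txt and then replays the whole file through rpc_cmd; the Malloc1-Malloc10 bdevs and the TCP listener notice that follow are the visible result. A hypothetical expansion of a single iteration is sketched below: the RPC names are standard SPDK RPCs, but the malloc size/block size and the "-a -s SPDK$i" subsystem options are assumptions, not read from the trace.

# Sketch of one loop iteration building rpcs.txt, plus a simple replay loop.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
i=1
cat << EOL >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOL
# Replay every accumulated command against the target's RPC socket.
while read -r cmd; do
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock $cmd   # intentional word splitting
done < rpcs.txt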
00:22:15.322 [2024-10-09 00:29:45.886320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.322 Malloc2 00:22:15.322 Malloc3 00:22:15.583 Malloc4 00:22:15.583 Malloc5 00:22:15.583 Malloc6 00:22:15.583 Malloc7 00:22:15.583 Malloc8 00:22:15.845 Malloc9 00:22:15.845 Malloc10 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3316721 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3316721 /var/tmp/bdevperf.sock 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3316721 ']' 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
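With the target fully provisioned, tc1 starts a short-lived bdev_svc instance (pid 3316721, tracked as perfpid) whose only purpose is to attach a controller to each of the ten subsystems; the attach configuration is produced by the gen_nvmf_target_json helper from test/nvmf/common.sh (traced below) and handed over through process substitution. A condensed sketch of that launch, assuming gen_nvmf_target_json has been sourced:

# Sketch of how shutdown_tc1 starts the throw-away initiator-side app.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
perfpid=$!

# Block until the bdev layer is initialized, i.e. all ten controllers attached.
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock framework_wait_init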
00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.845 { 00:22:15.845 "params": { 00:22:15.845 "name": "Nvme$subsystem", 00:22:15.845 "trtype": "$TEST_TRANSPORT", 00:22:15.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.845 "adrfam": "ipv4", 00:22:15.845 "trsvcid": "$NVMF_PORT", 00:22:15.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.845 "hdgst": ${hdgst:-false}, 00:22:15.845 "ddgst": ${ddgst:-false} 00:22:15.845 }, 00:22:15.845 "method": "bdev_nvme_attach_controller" 00:22:15.845 } 00:22:15.845 EOF 00:22:15.845 )") 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.845 { 00:22:15.845 "params": { 00:22:15.845 "name": "Nvme$subsystem", 00:22:15.845 "trtype": "$TEST_TRANSPORT", 00:22:15.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.845 "adrfam": "ipv4", 00:22:15.845 "trsvcid": "$NVMF_PORT", 00:22:15.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.845 "hdgst": ${hdgst:-false}, 00:22:15.845 "ddgst": ${ddgst:-false} 00:22:15.845 }, 00:22:15.845 "method": "bdev_nvme_attach_controller" 00:22:15.845 } 00:22:15.845 EOF 00:22:15.845 )") 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.845 { 00:22:15.845 "params": { 00:22:15.845 "name": "Nvme$subsystem", 00:22:15.845 "trtype": "$TEST_TRANSPORT", 00:22:15.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.845 "adrfam": "ipv4", 00:22:15.845 "trsvcid": "$NVMF_PORT", 00:22:15.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.845 "hdgst": ${hdgst:-false}, 00:22:15.845 "ddgst": ${ddgst:-false} 00:22:15.845 }, 00:22:15.845 "method": "bdev_nvme_attach_controller" 00:22:15.845 } 00:22:15.845 EOF 00:22:15.845 )") 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.845 { 00:22:15.845 "params": { 00:22:15.845 "name": "Nvme$subsystem", 00:22:15.845 "trtype": "$TEST_TRANSPORT", 00:22:15.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.845 "adrfam": "ipv4", 00:22:15.845 "trsvcid": "$NVMF_PORT", 00:22:15.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.845 "hdgst": ${hdgst:-false}, 00:22:15.845 "ddgst": ${ddgst:-false} 00:22:15.845 }, 00:22:15.845 "method": "bdev_nvme_attach_controller" 00:22:15.845 } 00:22:15.845 EOF 00:22:15.845 )") 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.845 { 00:22:15.845 "params": { 00:22:15.845 "name": "Nvme$subsystem", 00:22:15.845 "trtype": "$TEST_TRANSPORT", 00:22:15.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.845 "adrfam": "ipv4", 00:22:15.845 "trsvcid": "$NVMF_PORT", 00:22:15.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.845 "hdgst": ${hdgst:-false}, 00:22:15.845 "ddgst": ${ddgst:-false} 00:22:15.845 }, 00:22:15.845 "method": "bdev_nvme_attach_controller" 00:22:15.845 } 00:22:15.845 EOF 00:22:15.845 )") 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.845 { 00:22:15.845 "params": { 00:22:15.845 "name": "Nvme$subsystem", 00:22:15.845 "trtype": "$TEST_TRANSPORT", 00:22:15.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.845 "adrfam": "ipv4", 00:22:15.845 "trsvcid": "$NVMF_PORT", 00:22:15.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.845 "hdgst": ${hdgst:-false}, 00:22:15.845 "ddgst": ${ddgst:-false} 00:22:15.845 }, 00:22:15.845 "method": "bdev_nvme_attach_controller" 00:22:15.845 } 00:22:15.845 EOF 00:22:15.845 )") 00:22:15.845 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.845 [2024-10-09 00:29:46.396727] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:22:15.845 [2024-10-09 00:29:46.396800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.846 { 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme$subsystem", 00:22:15.846 "trtype": "$TEST_TRANSPORT", 00:22:15.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "$NVMF_PORT", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.846 "hdgst": ${hdgst:-false}, 00:22:15.846 "ddgst": ${ddgst:-false} 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 } 00:22:15.846 EOF 00:22:15.846 )") 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.846 { 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme$subsystem", 00:22:15.846 "trtype": "$TEST_TRANSPORT", 00:22:15.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "$NVMF_PORT", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.846 "hdgst": ${hdgst:-false}, 00:22:15.846 "ddgst": ${ddgst:-false} 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 } 00:22:15.846 EOF 00:22:15.846 )") 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.846 { 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme$subsystem", 00:22:15.846 "trtype": "$TEST_TRANSPORT", 00:22:15.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "$NVMF_PORT", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.846 "hdgst": ${hdgst:-false}, 00:22:15.846 "ddgst": ${ddgst:-false} 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 } 00:22:15.846 EOF 00:22:15.846 )") 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.846 { 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme$subsystem", 00:22:15.846 "trtype": "$TEST_TRANSPORT", 00:22:15.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.846 "adrfam": "ipv4", 
00:22:15.846 "trsvcid": "$NVMF_PORT", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.846 "hdgst": ${hdgst:-false}, 00:22:15.846 "ddgst": ${ddgst:-false} 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 } 00:22:15.846 EOF 00:22:15.846 )") 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:15.846 00:29:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme1", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 },{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme2", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 },{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme3", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 },{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme4", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 },{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme5", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 },{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme6", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 },{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme7", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 
"adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 },{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme8", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 },{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme9", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 },{ 00:22:15.846 "params": { 00:22:15.846 "name": "Nvme10", 00:22:15.846 "trtype": "tcp", 00:22:15.846 "traddr": "10.0.0.2", 00:22:15.846 "adrfam": "ipv4", 00:22:15.846 "trsvcid": "4420", 00:22:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:15.846 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:15.846 "hdgst": false, 00:22:15.846 "ddgst": false 00:22:15.846 }, 00:22:15.846 "method": "bdev_nvme_attach_controller" 00:22:15.846 }' 00:22:16.108 [2024-10-09 00:29:46.480652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.108 [2024-10-09 00:29:46.576237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.494 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.494 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:17.494 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:17.494 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.494 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.494 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.494 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3316721 00:22:17.494 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:17.494 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:18.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3316721 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3316342 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.458 { 00:22:18.458 "params": { 00:22:18.458 "name": "Nvme$subsystem", 00:22:18.458 "trtype": "$TEST_TRANSPORT", 00:22:18.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.458 "adrfam": "ipv4", 00:22:18.458 "trsvcid": "$NVMF_PORT", 00:22:18.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.458 "hdgst": ${hdgst:-false}, 00:22:18.458 "ddgst": ${ddgst:-false} 00:22:18.458 }, 00:22:18.458 "method": "bdev_nvme_attach_controller" 00:22:18.458 } 00:22:18.458 EOF 00:22:18.458 )") 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.458 { 00:22:18.458 "params": { 00:22:18.458 "name": "Nvme$subsystem", 00:22:18.458 "trtype": "$TEST_TRANSPORT", 00:22:18.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.458 "adrfam": "ipv4", 00:22:18.458 "trsvcid": "$NVMF_PORT", 00:22:18.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.458 "hdgst": ${hdgst:-false}, 00:22:18.458 "ddgst": ${ddgst:-false} 00:22:18.458 }, 00:22:18.458 "method": "bdev_nvme_attach_controller" 00:22:18.458 } 00:22:18.458 EOF 00:22:18.458 )") 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.458 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.458 { 00:22:18.458 "params": { 00:22:18.458 "name": "Nvme$subsystem", 00:22:18.458 "trtype": "$TEST_TRANSPORT", 00:22:18.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.458 "adrfam": "ipv4", 00:22:18.458 "trsvcid": "$NVMF_PORT", 00:22:18.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.458 "hdgst": ${hdgst:-false}, 00:22:18.458 "ddgst": ${ddgst:-false} 00:22:18.458 }, 00:22:18.458 "method": "bdev_nvme_attach_controller" 00:22:18.458 } 00:22:18.458 EOF 00:22:18.458 )") 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.459 { 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme$subsystem", 00:22:18.459 "trtype": "$TEST_TRANSPORT", 00:22:18.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "$NVMF_PORT", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.459 "hdgst": ${hdgst:-false}, 00:22:18.459 "ddgst": ${ddgst:-false} 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 } 00:22:18.459 EOF 00:22:18.459 )") 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.459 { 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme$subsystem", 00:22:18.459 "trtype": "$TEST_TRANSPORT", 00:22:18.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "$NVMF_PORT", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.459 "hdgst": ${hdgst:-false}, 00:22:18.459 "ddgst": ${ddgst:-false} 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 } 00:22:18.459 EOF 00:22:18.459 )") 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.459 { 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme$subsystem", 00:22:18.459 "trtype": "$TEST_TRANSPORT", 00:22:18.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "$NVMF_PORT", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.459 "hdgst": ${hdgst:-false}, 00:22:18.459 "ddgst": ${ddgst:-false} 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 } 00:22:18.459 EOF 00:22:18.459 )") 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.459 { 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme$subsystem", 00:22:18.459 "trtype": "$TEST_TRANSPORT", 00:22:18.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "$NVMF_PORT", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.459 "hdgst": ${hdgst:-false}, 00:22:18.459 "ddgst": ${ddgst:-false} 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 } 00:22:18.459 EOF 00:22:18.459 )") 00:22:18.459 [2024-10-09 00:29:48.854392] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
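As traced above, tc1 then kills the bdev_svc initiator with SIGKILL while its controllers are still attached, checks with kill -0 that the target (pid 3316342) survived the abrupt disconnects, and reruns bdevperf against the same ten subsystems to prove I/O still flows: -q 64 is the queue depth, -o 65536 the I/O size in bytes, -w verify a verified read/write workload, and -t 1 a one-second run. A compressed sketch of that sequence, reusing the pids recorded earlier in this run and omitting error handling:

# Core of shutdown_tc1: abrupt initiator death followed by an I/O sanity run.
# perfpid=3316721 (bdev_svc) and nvmfpid=3316342 (nvmf_tgt) in this run.
kill -9 "$perfpid"          # drop all ten host connections with no cleanup
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"          # a non-zero exit here would mean the target died

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1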
00:22:18.459 [2024-10-09 00:29:48.854444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317222 ] 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.459 { 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme$subsystem", 00:22:18.459 "trtype": "$TEST_TRANSPORT", 00:22:18.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "$NVMF_PORT", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.459 "hdgst": ${hdgst:-false}, 00:22:18.459 "ddgst": ${ddgst:-false} 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 } 00:22:18.459 EOF 00:22:18.459 )") 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.459 { 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme$subsystem", 00:22:18.459 "trtype": "$TEST_TRANSPORT", 00:22:18.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "$NVMF_PORT", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.459 "hdgst": ${hdgst:-false}, 00:22:18.459 "ddgst": ${ddgst:-false} 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 } 00:22:18.459 EOF 00:22:18.459 )") 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:18.459 { 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme$subsystem", 00:22:18.459 "trtype": "$TEST_TRANSPORT", 00:22:18.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "$NVMF_PORT", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.459 "hdgst": ${hdgst:-false}, 00:22:18.459 "ddgst": ${ddgst:-false} 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 } 00:22:18.459 EOF 00:22:18.459 )") 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
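The ten per-controller fragments accumulated above are joined with IFS=',' and piped through jq, producing the document that bdevperf reads via --json /dev/fd/62; the joined fragment list is echoed just below. Only the inner "params"/"method" entries are visible in the trace, so the outer wrapper in the following sketch is an assumption based on the standard SPDK JSON configuration layout, with Nvme2 through Nvme10 omitted for brevity:

# Presumed overall shape of the generated configuration (entries 2-10 omitted).
cat <<'JSON' > /tmp/nvmf_bdevperf.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON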
00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:18.459 00:29:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme1", 00:22:18.459 "trtype": "tcp", 00:22:18.459 "traddr": "10.0.0.2", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "4420", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.459 "hdgst": false, 00:22:18.459 "ddgst": false 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 },{ 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme2", 00:22:18.459 "trtype": "tcp", 00:22:18.459 "traddr": "10.0.0.2", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "4420", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:18.459 "hdgst": false, 00:22:18.459 "ddgst": false 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 },{ 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme3", 00:22:18.459 "trtype": "tcp", 00:22:18.459 "traddr": "10.0.0.2", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "4420", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:18.459 "hdgst": false, 00:22:18.459 "ddgst": false 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 },{ 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme4", 00:22:18.459 "trtype": "tcp", 00:22:18.459 "traddr": "10.0.0.2", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "4420", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:18.459 "hdgst": false, 00:22:18.459 "ddgst": false 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 },{ 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme5", 00:22:18.459 "trtype": "tcp", 00:22:18.459 "traddr": "10.0.0.2", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "4420", 00:22:18.459 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:18.459 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:18.459 "hdgst": false, 00:22:18.459 "ddgst": false 00:22:18.459 }, 00:22:18.459 "method": "bdev_nvme_attach_controller" 00:22:18.459 },{ 00:22:18.459 "params": { 00:22:18.459 "name": "Nvme6", 00:22:18.459 "trtype": "tcp", 00:22:18.459 "traddr": "10.0.0.2", 00:22:18.459 "adrfam": "ipv4", 00:22:18.459 "trsvcid": "4420", 00:22:18.460 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:18.460 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:18.460 "hdgst": false, 00:22:18.460 "ddgst": false 00:22:18.460 }, 00:22:18.460 "method": "bdev_nvme_attach_controller" 00:22:18.460 },{ 00:22:18.460 "params": { 00:22:18.460 "name": "Nvme7", 00:22:18.460 "trtype": "tcp", 00:22:18.460 "traddr": "10.0.0.2", 00:22:18.460 "adrfam": "ipv4", 00:22:18.460 "trsvcid": "4420", 00:22:18.460 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:18.460 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:18.460 "hdgst": false, 00:22:18.460 "ddgst": false 00:22:18.460 }, 00:22:18.460 "method": "bdev_nvme_attach_controller" 00:22:18.460 },{ 00:22:18.460 "params": { 00:22:18.460 "name": "Nvme8", 00:22:18.460 "trtype": "tcp", 00:22:18.460 "traddr": "10.0.0.2", 00:22:18.460 "adrfam": "ipv4", 00:22:18.460 "trsvcid": "4420", 00:22:18.460 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:18.460 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:18.460 "hdgst": false, 00:22:18.460 "ddgst": false 00:22:18.460 }, 00:22:18.460 "method": "bdev_nvme_attach_controller" 00:22:18.460 },{ 00:22:18.460 "params": { 00:22:18.460 "name": "Nvme9", 00:22:18.460 "trtype": "tcp", 00:22:18.460 "traddr": "10.0.0.2", 00:22:18.460 "adrfam": "ipv4", 00:22:18.460 "trsvcid": "4420", 00:22:18.460 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:18.460 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:18.460 "hdgst": false, 00:22:18.460 "ddgst": false 00:22:18.460 }, 00:22:18.460 "method": "bdev_nvme_attach_controller" 00:22:18.460 },{ 00:22:18.460 "params": { 00:22:18.460 "name": "Nvme10", 00:22:18.460 "trtype": "tcp", 00:22:18.460 "traddr": "10.0.0.2", 00:22:18.460 "adrfam": "ipv4", 00:22:18.460 "trsvcid": "4420", 00:22:18.460 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:18.460 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:18.460 "hdgst": false, 00:22:18.460 "ddgst": false 00:22:18.460 }, 00:22:18.460 "method": "bdev_nvme_attach_controller" 00:22:18.460 }' 00:22:18.460 [2024-10-09 00:29:48.934403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.460 [2024-10-09 00:29:48.998896] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.842 Running I/O for 1 seconds... 00:22:21.222 1806.00 IOPS, 112.88 MiB/s 00:22:21.222 Latency(us) 00:22:21.222 [2024-10-08T22:29:51.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.222 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme1n1 : 1.16 220.59 13.79 0.00 0.00 286858.88 14636.37 244667.73 00:22:21.222 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme2n1 : 1.17 219.24 13.70 0.00 0.00 283103.15 17257.81 251658.24 00:22:21.222 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme3n1 : 1.09 240.02 15.00 0.00 0.00 246980.36 10158.08 251658.24 00:22:21.222 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme4n1 : 1.15 222.24 13.89 0.00 0.00 267416.32 21954.56 246415.36 00:22:21.222 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme5n1 : 1.15 222.87 13.93 0.00 0.00 261583.36 16711.68 260396.37 00:22:21.222 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme6n1 : 1.19 214.55 13.41 0.00 0.00 262996.48 32331.09 253405.87 00:22:21.222 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme7n1 : 1.20 270.15 16.88 0.00 0.00 207701.92 2143.57 249910.61 00:22:21.222 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme8n1 : 1.16 219.88 13.74 0.00 0.00 248789.33 36918.61 234181.97 00:22:21.222 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme9n1 : 1.22 263.23 16.45 0.00 0.00 204757.67 9338.88 270882.13 00:22:21.222 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:21.222 Verification LBA range: start 0x0 length 0x400 00:22:21.222 Nvme10n1 : 1.20 269.83 16.86 0.00 0.00 194961.97 467.63 234181.97 00:22:21.222 [2024-10-08T22:29:51.857Z] =================================================================================================================== 00:22:21.222 [2024-10-08T22:29:51.857Z] Total : 2362.59 147.66 0.00 0.00 243310.71 467.63 270882.13 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.222 rmmod nvme_tcp 00:22:21.222 rmmod nvme_fabrics 00:22:21.222 rmmod nvme_keyring 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 3316342 ']' 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 3316342 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3316342 ']' 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3316342 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.222 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3316342 00:22:21.482 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:21.482 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:21.482 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3316342' 00:22:21.482 killing process with pid 3316342 00:22:21.482 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3316342 00:22:21.482 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3316342 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.766 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.749 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.749 00:22:23.749 real 0m16.931s 00:22:23.749 user 0m34.360s 00:22:23.749 sys 0m6.902s 00:22:23.749 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.749 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.749 ************************************ 00:22:23.749 END TEST nvmf_shutdown_tc1 00:22:23.749 ************************************ 00:22:23.749 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:23.749 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:23.750 ************************************ 00:22:23.750 START TEST nvmf_shutdown_tc2 00:22:23.750 ************************************ 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:23.750 00:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:23.750 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:23.750 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:23.750 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:23.750 00:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:23.750 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.750 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.751 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.751 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.751 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.751 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.751 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:24.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:22:24.012 00:22:24.012 --- 10.0.0.2 ping statistics --- 00:22:24.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.012 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:22:24.012 00:22:24.012 --- 10.0.0.1 ping statistics --- 00:22:24.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.012 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:24.012 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:24.273 00:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3318538 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3318538 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3318538 ']' 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.273 00:29:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.273 [2024-10-09 00:29:54.746792] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:22:24.273 [2024-10-09 00:29:54.746855] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.273 [2024-10-09 00:29:54.832604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.273 [2024-10-09 00:29:54.892575] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.273 [2024-10-09 00:29:54.892609] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.273 [2024-10-09 00:29:54.892615] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.273 [2024-10-09 00:29:54.892619] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.273 [2024-10-09 00:29:54.892623] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
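The trace above shows nvmf_tcp_init wiring the two e810 ports into a namespace-based topology before nvmfappstart launches the target inside that namespace. A minimal sketch of that bring-up follows; every command, interface name, address and binary path is copied from this trace (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, port 4420, core mask 0x1E), so treat it as a condensed reading of the log rather than the literal nvmf/common.sh implementation:

# Move one port into a private namespace so target and initiator use separate network stacks
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start nvmf_tgt in the target namespace on cores 1-4 (-m 0x1E) with full tracepoints (-e 0xFFFF)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E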
00:22:24.273 [2024-10-09 00:29:54.893932] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.273 [2024-10-09 00:29:54.894165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.273 [2024-10-09 00:29:54.894317] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.273 [2024-10-09 00:29:54.894318] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.231 [2024-10-09 00:29:55.592938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:25.231 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.232 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.232 Malloc1 00:22:25.232 [2024-10-09 00:29:55.691562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.232 Malloc2 00:22:25.232 Malloc3 00:22:25.232 Malloc4 00:22:25.232 Malloc5 00:22:25.232 Malloc6 00:22:25.491 Malloc7 00:22:25.491 Malloc8 00:22:25.491 Malloc9 00:22:25.491 Malloc10 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3318799 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3318799 /var/tmp/bdevperf.sock 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3318799 ']' 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.491 00:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.491 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.491 { 00:22:25.491 "params": { 00:22:25.491 "name": "Nvme$subsystem", 00:22:25.491 "trtype": "$TEST_TRANSPORT", 00:22:25.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.492 "adrfam": "ipv4", 00:22:25.492 "trsvcid": "$NVMF_PORT", 00:22:25.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.492 "hdgst": ${hdgst:-false}, 00:22:25.492 "ddgst": ${ddgst:-false} 00:22:25.492 }, 00:22:25.492 "method": "bdev_nvme_attach_controller" 00:22:25.492 } 00:22:25.492 EOF 00:22:25.492 )") 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.492 { 00:22:25.492 "params": { 00:22:25.492 "name": "Nvme$subsystem", 00:22:25.492 "trtype": "$TEST_TRANSPORT", 00:22:25.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.492 "adrfam": "ipv4", 00:22:25.492 "trsvcid": "$NVMF_PORT", 00:22:25.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.492 "hdgst": ${hdgst:-false}, 00:22:25.492 "ddgst": ${ddgst:-false} 00:22:25.492 }, 00:22:25.492 "method": "bdev_nvme_attach_controller" 00:22:25.492 } 00:22:25.492 EOF 00:22:25.492 )") 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.492 { 00:22:25.492 "params": { 00:22:25.492 
"name": "Nvme$subsystem", 00:22:25.492 "trtype": "$TEST_TRANSPORT", 00:22:25.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.492 "adrfam": "ipv4", 00:22:25.492 "trsvcid": "$NVMF_PORT", 00:22:25.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.492 "hdgst": ${hdgst:-false}, 00:22:25.492 "ddgst": ${ddgst:-false} 00:22:25.492 }, 00:22:25.492 "method": "bdev_nvme_attach_controller" 00:22:25.492 } 00:22:25.492 EOF 00:22:25.492 )") 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.492 { 00:22:25.492 "params": { 00:22:25.492 "name": "Nvme$subsystem", 00:22:25.492 "trtype": "$TEST_TRANSPORT", 00:22:25.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.492 "adrfam": "ipv4", 00:22:25.492 "trsvcid": "$NVMF_PORT", 00:22:25.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.492 "hdgst": ${hdgst:-false}, 00:22:25.492 "ddgst": ${ddgst:-false} 00:22:25.492 }, 00:22:25.492 "method": "bdev_nvme_attach_controller" 00:22:25.492 } 00:22:25.492 EOF 00:22:25.492 )") 00:22:25.492 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.752 { 00:22:25.752 "params": { 00:22:25.752 "name": "Nvme$subsystem", 00:22:25.752 "trtype": "$TEST_TRANSPORT", 00:22:25.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.752 "adrfam": "ipv4", 00:22:25.752 "trsvcid": "$NVMF_PORT", 00:22:25.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.752 "hdgst": ${hdgst:-false}, 00:22:25.752 "ddgst": ${ddgst:-false} 00:22:25.752 }, 00:22:25.752 "method": "bdev_nvme_attach_controller" 00:22:25.752 } 00:22:25.752 EOF 00:22:25.752 )") 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.752 { 00:22:25.752 "params": { 00:22:25.752 "name": "Nvme$subsystem", 00:22:25.752 "trtype": "$TEST_TRANSPORT", 00:22:25.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.752 "adrfam": "ipv4", 00:22:25.752 "trsvcid": "$NVMF_PORT", 00:22:25.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.752 "hdgst": ${hdgst:-false}, 00:22:25.752 "ddgst": ${ddgst:-false} 00:22:25.752 }, 00:22:25.752 "method": "bdev_nvme_attach_controller" 00:22:25.752 } 00:22:25.752 EOF 00:22:25.752 )") 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.752 [2024-10-09 00:29:56.139186] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:22:25.752 [2024-10-09 00:29:56.139240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318799 ] 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.752 { 00:22:25.752 "params": { 00:22:25.752 "name": "Nvme$subsystem", 00:22:25.752 "trtype": "$TEST_TRANSPORT", 00:22:25.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.752 "adrfam": "ipv4", 00:22:25.752 "trsvcid": "$NVMF_PORT", 00:22:25.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.752 "hdgst": ${hdgst:-false}, 00:22:25.752 "ddgst": ${ddgst:-false} 00:22:25.752 }, 00:22:25.752 "method": "bdev_nvme_attach_controller" 00:22:25.752 } 00:22:25.752 EOF 00:22:25.752 )") 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.752 { 00:22:25.752 "params": { 00:22:25.752 "name": "Nvme$subsystem", 00:22:25.752 "trtype": "$TEST_TRANSPORT", 00:22:25.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.752 "adrfam": "ipv4", 00:22:25.752 "trsvcid": "$NVMF_PORT", 00:22:25.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.752 "hdgst": ${hdgst:-false}, 00:22:25.752 "ddgst": ${ddgst:-false} 00:22:25.752 }, 00:22:25.752 "method": "bdev_nvme_attach_controller" 00:22:25.752 } 00:22:25.752 EOF 00:22:25.752 )") 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.752 { 00:22:25.752 "params": { 00:22:25.752 "name": "Nvme$subsystem", 00:22:25.752 "trtype": "$TEST_TRANSPORT", 00:22:25.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.752 "adrfam": "ipv4", 00:22:25.752 "trsvcid": "$NVMF_PORT", 00:22:25.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.752 "hdgst": ${hdgst:-false}, 00:22:25.752 "ddgst": ${ddgst:-false} 00:22:25.752 }, 00:22:25.752 "method": "bdev_nvme_attach_controller" 00:22:25.752 } 00:22:25.752 EOF 00:22:25.752 )") 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:25.752 { 00:22:25.752 "params": { 00:22:25.752 "name": "Nvme$subsystem", 00:22:25.752 "trtype": "$TEST_TRANSPORT", 00:22:25.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.752 
"adrfam": "ipv4", 00:22:25.752 "trsvcid": "$NVMF_PORT", 00:22:25.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.752 "hdgst": ${hdgst:-false}, 00:22:25.752 "ddgst": ${ddgst:-false} 00:22:25.752 }, 00:22:25.752 "method": "bdev_nvme_attach_controller" 00:22:25.752 } 00:22:25.752 EOF 00:22:25.752 )") 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:25.752 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:25.752 "params": { 00:22:25.752 "name": "Nvme1", 00:22:25.752 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 },{ 00:22:25.753 "params": { 00:22:25.753 "name": "Nvme2", 00:22:25.753 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 },{ 00:22:25.753 "params": { 00:22:25.753 "name": "Nvme3", 00:22:25.753 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 },{ 00:22:25.753 "params": { 00:22:25.753 "name": "Nvme4", 00:22:25.753 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 },{ 00:22:25.753 "params": { 00:22:25.753 "name": "Nvme5", 00:22:25.753 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 },{ 00:22:25.753 "params": { 00:22:25.753 "name": "Nvme6", 00:22:25.753 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 },{ 00:22:25.753 "params": { 00:22:25.753 "name": "Nvme7", 00:22:25.753 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 
00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 },{ 00:22:25.753 "params": { 00:22:25.753 "name": "Nvme8", 00:22:25.753 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 },{ 00:22:25.753 "params": { 00:22:25.753 "name": "Nvme9", 00:22:25.753 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 },{ 00:22:25.753 "params": { 00:22:25.753 "name": "Nvme10", 00:22:25.753 "trtype": "tcp", 00:22:25.753 "traddr": "10.0.0.2", 00:22:25.753 "adrfam": "ipv4", 00:22:25.753 "trsvcid": "4420", 00:22:25.753 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:25.753 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:25.753 "hdgst": false, 00:22:25.753 "ddgst": false 00:22:25.753 }, 00:22:25.753 "method": "bdev_nvme_attach_controller" 00:22:25.753 }' 00:22:25.753 [2024-10-09 00:29:56.217026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.753 [2024-10-09 00:29:56.281938] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.134 Running I/O for 10 seconds... 
00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.134 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.395 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:27.395 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:27.395 00:29:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:27.655 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:27.655 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:27.655 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:27.655 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:27.655 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.655 00:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.655 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.655 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:27.655 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:27.655 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=135 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3318799 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3318799 ']' 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3318799 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3318799 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:27.914 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3318799' 00:22:27.914 killing process with pid 3318799 00:22:27.915 00:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3318799 00:22:27.915 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3318799 00:22:27.915 Received shutdown signal, test time was about 0.970893 seconds 00:22:27.915 00:22:27.915 Latency(us) 00:22:27.915 [2024-10-08T22:29:58.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.915 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme1n1 : 0.94 208.29 13.02 0.00 0.00 302403.64 1884.16 248162.99 00:22:27.915 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme2n1 : 0.94 203.28 12.70 0.00 0.00 304673.85 13981.01 253405.87 00:22:27.915 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme3n1 : 0.96 267.10 16.69 0.00 0.00 227207.68 14308.69 246415.36 00:22:27.915 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme4n1 : 0.95 274.58 17.16 0.00 0.00 215917.14 3099.31 237677.23 00:22:27.915 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme5n1 : 0.97 268.74 16.80 0.00 0.00 216356.13 1522.35 242920.11 00:22:27.915 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme6n1 : 0.97 263.95 16.50 0.00 0.00 215481.60 16274.77 244667.73 00:22:27.915 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme7n1 : 0.94 210.13 13.13 0.00 0.00 262327.70 3003.73 244667.73 00:22:27.915 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme8n1 : 0.96 266.49 16.66 0.00 0.00 204151.25 19223.89 270882.13 00:22:27.915 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme9n1 : 0.96 265.59 16.60 0.00 0.00 200237.01 20643.84 239424.85 00:22:27.915 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.915 Verification LBA range: start 0x0 length 0x400 00:22:27.915 Nvme10n1 : 0.95 201.31 12.58 0.00 0.00 257213.44 19879.25 279620.27 00:22:27.915 [2024-10-08T22:29:58.550Z] =================================================================================================================== 00:22:27.915 [2024-10-08T22:29:58.550Z] Total : 2429.45 151.84 0.00 0.00 236128.70 1522.35 279620.27 00:22:28.175 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3318538 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:29.114 00:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.114 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.114 rmmod nvme_tcp 00:22:29.114 rmmod nvme_fabrics 00:22:29.114 rmmod nvme_keyring 00:22:29.374 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.374 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:29.374 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 3318538 ']' 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 3318538 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3318538 ']' 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3318538 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3318538 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3318538' 00:22:29.375 killing process with pid 3318538 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3318538 00:22:29.375 00:29:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3318538 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:29.635 00:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.635 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.552 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.552 00:22:31.552 real 0m7.831s 00:22:31.552 user 0m23.404s 00:22:31.552 sys 0m1.327s 00:22:31.552 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.552 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.552 ************************************ 00:22:31.552 END TEST nvmf_shutdown_tc2 00:22:31.552 ************************************ 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:31.813 ************************************ 00:22:31.813 START TEST nvmf_shutdown_tc3 00:22:31.813 ************************************ 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:31.813 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:31.813 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.813 00:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:31.813 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:31.813 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.813 00:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:31.813 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.814 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:22:32.074 00:22:32.074 --- 10.0.0.2 ping statistics --- 00:22:32.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.074 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:22:32.074 00:22:32.074 --- 10.0.0.1 ping statistics --- 00:22:32.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.074 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3320170 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3320170 00:22:32.074 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:32.075 00:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3320170 ']' 00:22:32.075 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.075 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.075 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.075 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.075 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.075 [2024-10-09 00:30:02.657814] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:22:32.075 [2024-10-09 00:30:02.657874] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.335 [2024-10-09 00:30:02.744357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.335 [2024-10-09 00:30:02.814559] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.335 [2024-10-09 00:30:02.814597] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.335 [2024-10-09 00:30:02.814603] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.335 [2024-10-09 00:30:02.814608] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.335 [2024-10-09 00:30:02.814613] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
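The trace above is nvmf_tcp_init wiring the two E810 ports (cvl_0_0/cvl_0_1) into a point-to-point test network before the target application comes up. A condensed sketch of the equivalent commands, reconstructed from this trace (interface, namespace and IP values are the ones printed above; the ipts call is shown here as the plain iptables rule it expands to in the next trace entry):

  # move the target-side port into its own namespace; the initiator port stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # accept NVMe/TCP traffic (port 4420) on the initiator-facing port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # verify reachability in both directions, then load the host-side driver
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # nvmfappstart then launches nvmf_tgt inside the namespace with core mask 0x1E (reactors on cores 1-4)
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

The resulting nvmfpid (3320170 in this run) is the process that shutdown_tc3 later kills while I/O is in flight.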
00:22:32.335 [2024-10-09 00:30:02.816415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.335 [2024-10-09 00:30:02.816571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.335 [2024-10-09 00:30:02.816770] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.335 [2024-10-09 00:30:02.816771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.905 [2024-10-09 00:30:03.496053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.905 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.165 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.165 Malloc1 00:22:33.165 [2024-10-09 00:30:03.594741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.165 Malloc2 00:22:33.165 Malloc3 00:22:33.165 Malloc4 00:22:33.165 Malloc5 00:22:33.165 Malloc6 00:22:33.426 Malloc7 00:22:33.426 Malloc8 00:22:33.426 Malloc9 00:22:33.426 Malloc10 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3320554 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3320554 /var/tmp/bdevperf.sock 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3320554 ']' 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.426 00:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.426 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:33.427 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.427 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:33.427 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.427 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:33.427 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:33.427 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.427 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.427 { 00:22:33.427 "params": { 00:22:33.427 "name": "Nvme$subsystem", 00:22:33.427 "trtype": "$TEST_TRANSPORT", 00:22:33.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.427 "adrfam": "ipv4", 00:22:33.427 "trsvcid": "$NVMF_PORT", 00:22:33.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.427 "hdgst": ${hdgst:-false}, 00:22:33.427 "ddgst": ${ddgst:-false} 00:22:33.427 }, 00:22:33.427 "method": "bdev_nvme_attach_controller" 00:22:33.427 } 00:22:33.427 EOF 00:22:33.427 )") 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.427 { 00:22:33.427 "params": { 00:22:33.427 "name": "Nvme$subsystem", 00:22:33.427 "trtype": "$TEST_TRANSPORT", 00:22:33.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.427 "adrfam": "ipv4", 00:22:33.427 "trsvcid": "$NVMF_PORT", 00:22:33.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.427 "hdgst": ${hdgst:-false}, 00:22:33.427 "ddgst": ${ddgst:-false} 00:22:33.427 }, 00:22:33.427 "method": "bdev_nvme_attach_controller" 00:22:33.427 } 00:22:33.427 EOF 00:22:33.427 )") 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.427 { 00:22:33.427 "params": { 00:22:33.427 
"name": "Nvme$subsystem", 00:22:33.427 "trtype": "$TEST_TRANSPORT", 00:22:33.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.427 "adrfam": "ipv4", 00:22:33.427 "trsvcid": "$NVMF_PORT", 00:22:33.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.427 "hdgst": ${hdgst:-false}, 00:22:33.427 "ddgst": ${ddgst:-false} 00:22:33.427 }, 00:22:33.427 "method": "bdev_nvme_attach_controller" 00:22:33.427 } 00:22:33.427 EOF 00:22:33.427 )") 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.427 { 00:22:33.427 "params": { 00:22:33.427 "name": "Nvme$subsystem", 00:22:33.427 "trtype": "$TEST_TRANSPORT", 00:22:33.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.427 "adrfam": "ipv4", 00:22:33.427 "trsvcid": "$NVMF_PORT", 00:22:33.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.427 "hdgst": ${hdgst:-false}, 00:22:33.427 "ddgst": ${ddgst:-false} 00:22:33.427 }, 00:22:33.427 "method": "bdev_nvme_attach_controller" 00:22:33.427 } 00:22:33.427 EOF 00:22:33.427 )") 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.427 { 00:22:33.427 "params": { 00:22:33.427 "name": "Nvme$subsystem", 00:22:33.427 "trtype": "$TEST_TRANSPORT", 00:22:33.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.427 "adrfam": "ipv4", 00:22:33.427 "trsvcid": "$NVMF_PORT", 00:22:33.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.427 "hdgst": ${hdgst:-false}, 00:22:33.427 "ddgst": ${ddgst:-false} 00:22:33.427 }, 00:22:33.427 "method": "bdev_nvme_attach_controller" 00:22:33.427 } 00:22:33.427 EOF 00:22:33.427 )") 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.427 { 00:22:33.427 "params": { 00:22:33.427 "name": "Nvme$subsystem", 00:22:33.427 "trtype": "$TEST_TRANSPORT", 00:22:33.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.427 "adrfam": "ipv4", 00:22:33.427 "trsvcid": "$NVMF_PORT", 00:22:33.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.427 "hdgst": ${hdgst:-false}, 00:22:33.427 "ddgst": ${ddgst:-false} 00:22:33.427 }, 00:22:33.427 "method": "bdev_nvme_attach_controller" 00:22:33.427 } 00:22:33.427 EOF 00:22:33.427 )") 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.427 [2024-10-09 00:30:04.044265] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:22:33.427 [2024-10-09 00:30:04.044319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320554 ] 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.427 { 00:22:33.427 "params": { 00:22:33.427 "name": "Nvme$subsystem", 00:22:33.427 "trtype": "$TEST_TRANSPORT", 00:22:33.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.427 "adrfam": "ipv4", 00:22:33.427 "trsvcid": "$NVMF_PORT", 00:22:33.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.427 "hdgst": ${hdgst:-false}, 00:22:33.427 "ddgst": ${ddgst:-false} 00:22:33.427 }, 00:22:33.427 "method": "bdev_nvme_attach_controller" 00:22:33.427 } 00:22:33.427 EOF 00:22:33.427 )") 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.427 { 00:22:33.427 "params": { 00:22:33.427 "name": "Nvme$subsystem", 00:22:33.427 "trtype": "$TEST_TRANSPORT", 00:22:33.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.427 "adrfam": "ipv4", 00:22:33.427 "trsvcid": "$NVMF_PORT", 00:22:33.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.427 "hdgst": ${hdgst:-false}, 00:22:33.427 "ddgst": ${ddgst:-false} 00:22:33.427 }, 00:22:33.427 "method": "bdev_nvme_attach_controller" 00:22:33.427 } 00:22:33.427 EOF 00:22:33.427 )") 00:22:33.427 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.689 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.689 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.689 { 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme$subsystem", 00:22:33.689 "trtype": "$TEST_TRANSPORT", 00:22:33.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "$NVMF_PORT", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.689 "hdgst": ${hdgst:-false}, 00:22:33.689 "ddgst": ${ddgst:-false} 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 } 00:22:33.689 EOF 00:22:33.689 )") 00:22:33.689 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.689 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.689 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.689 { 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme$subsystem", 00:22:33.689 "trtype": "$TEST_TRANSPORT", 00:22:33.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.689 
"adrfam": "ipv4", 00:22:33.689 "trsvcid": "$NVMF_PORT", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.689 "hdgst": ${hdgst:-false}, 00:22:33.689 "ddgst": ${ddgst:-false} 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 } 00:22:33.689 EOF 00:22:33.689 )") 00:22:33.689 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:33.689 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:22:33.689 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:33.689 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme1", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 },{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme2", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 },{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme3", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 },{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme4", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 },{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme5", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 },{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme6", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 },{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme7", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 
00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 },{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme8", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 },{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme9", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 },{ 00:22:33.689 "params": { 00:22:33.689 "name": "Nvme10", 00:22:33.689 "trtype": "tcp", 00:22:33.689 "traddr": "10.0.0.2", 00:22:33.689 "adrfam": "ipv4", 00:22:33.689 "trsvcid": "4420", 00:22:33.689 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:33.689 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:33.689 "hdgst": false, 00:22:33.689 "ddgst": false 00:22:33.689 }, 00:22:33.689 "method": "bdev_nvme_attach_controller" 00:22:33.689 }' 00:22:33.689 [2024-10-09 00:30:04.123483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.689 [2024-10-09 00:30:04.188645] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.600 Running I/O for 10 seconds... 
00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:36.170 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:36.171 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:36.171 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:36.171 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.171 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:36.171 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.171 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:36.171 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:36.171 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3320170 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3320170 ']' 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3320170 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.446 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3320170 00:22:36.446 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:36.446 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:36.446 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3320170' 00:22:36.446 killing process with pid 3320170 00:22:36.446 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3320170 00:22:36.446 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3320170 00:22:36.446 [2024-10-09 00:30:07.003579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.446 [2024-10-09 00:30:07.003724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003768] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 
00:22:36.447 [2024-10-09 00:30:07.003872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.003935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe615b0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.006152] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.447 [2024-10-09 00:30:07.009432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90ab0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.009469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90ab0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.009475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90ab0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.009480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90ab0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.009485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90ab0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.009491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90ab0 is same with the state(6) to be set 00:22:36.447 [2024-10-09 00:30:07.009496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90ab0 is same with the state(6) to be set 00:22:36.447 
[2024-10-09 00:30:07.009501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90ab0 is same with the state(6) to be set
00:22:36.448 [2024-10-09 00:30:07.009731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90ab0 is same with the state(6) to be set
00:22:36.448 [2024-10-09 00:30:07.016523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61f50 is same with the state(6) to be set
00:22:36.449 [2024-10-09 00:30:07.016823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe61f50 is same with the state(6) to be set
00:22:36.449 [2024-10-09 00:30:07.017646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe62440 is same with the state(6) to be set
00:22:36.450 [2024-10-09 00:30:07.017975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe62440 is same with the state(6) to be set
00:22:36.450 [2024-10-09 00:30:07.018631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe62910 is same with the state(6) to be set
00:22:36.450 [2024-10-09 00:30:07.018952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe62910 is same with the state(6) to be set
00:22:36.450 [2024-10-09 00:30:07.019702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe62de0 is same with the state(6) to be set
00:22:36.451 [2024-10-09 00:30:07.020014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe62de0 is same with the state(6) to be set
00:22:36.451 [2024-10-09 00:30:07.021025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe637a0 is same with the state(6) to be set
00:22:36.452 [2024-10-09 00:30:07.021329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe637a0 is same with the state(6) to be set
00:22:36.452 [2024-10-09 00:30:07.021776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe905e0 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe905e0 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.022057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe905e0 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.022062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe905e0 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.022067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe905e0 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.022071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe905e0 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.022076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe905e0 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.022081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe905e0 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.029236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1066bc0 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.029366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 
00:30:07.029407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101e810 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.029456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101f280 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.029544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1018820 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.029633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcabe0 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.029861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd4d70 is same with the state(6) to be set 00:22:36.453 [2024-10-09 00:30:07.029956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 
00:30:07.029965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.029989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.029997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.030004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.453 [2024-10-09 00:30:07.030012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.453 [2024-10-09 00:30:07.030019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3450 is same with the state(6) to be set 00:22:36.454 [2024-10-09 00:30:07.030041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff19d0 is same with the state(6) to be set 00:22:36.454 [2024-10-09 00:30:07.030128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcba50 is same with the state(6) to be set 00:22:36.454 [2024-10-09 00:30:07.030215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.454 [2024-10-09 00:30:07.030271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd3ee0 is same with the state(6) to be set 00:22:36.454 [2024-10-09 00:30:07.030612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 
[2024-10-09 00:30:07.030684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 
00:30:07.030866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.030986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.030993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.031003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.031010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.031019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.031026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 
00:30:07.031036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.031043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.031052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.031060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.031069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.031076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.031085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.031092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.454 [2024-10-09 00:30:07.031102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.454 [2024-10-09 00:30:07.031109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 
00:30:07.031202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 
00:30:07.031366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.455 [2024-10-09 00:30:07.031713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.455 [2024-10-09 00:30:07.031743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.455 [2024-10-09 00:30:07.031787] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1102420 was disconnected and freed. reset controller. 00:22:36.456 [2024-10-09 00:30:07.032045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.032544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.032553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.040973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.041025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.041035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.041045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.041053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.041064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.041072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.041081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.041089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.041099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.041107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.041117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.456 [2024-10-09 00:30:07.041125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.456 [2024-10-09 00:30:07.041134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.041602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.041611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105ea10 is same with the state(6) to be set 00:22:36.457 [2024-10-09 00:30:07.041668] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x105ea10 was disconnected and freed. reset controller. 
00:22:36.457 [2024-10-09 00:30:07.041930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1066bc0 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.041961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101e810 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.041973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101f280 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.041987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1018820 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.042001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcabe0 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.042022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4d70 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.042038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff3450 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.042055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff19d0 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.042070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcba50 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.042083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd3ee0 (9): Bad file descriptor 00:22:36.457 [2024-10-09 00:30:07.043337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.043352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.043370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.043379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.043391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.043400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.043411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.043420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.457 [2024-10-09 00:30:07.043431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.457 [2024-10-09 00:30:07.043440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.458 [2024-10-09 00:30:07.043460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 
00:30:07.043632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043812] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.043991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.043998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.458 [2024-10-09 00:30:07.044157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.458 [2024-10-09 00:30:07.044165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.044454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.044524] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10599e0 was disconnected and freed. reset controller. 
00:22:36.459 [2024-10-09 00:30:07.045974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:36.459 [2024-10-09 00:30:07.047513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:36.459 [2024-10-09 00:30:07.047715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.459 [2024-10-09 00:30:07.047739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcba50 with addr=10.0.0.2, port=4420 00:22:36.459 [2024-10-09 00:30:07.047748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcba50 is same with the state(6) to be set 00:22:36.459 [2024-10-09 00:30:07.048444] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.459 [2024-10-09 00:30:07.048493] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.459 [2024-10-09 00:30:07.048531] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.459 [2024-10-09 00:30:07.048549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:36.459 [2024-10-09 00:30:07.048998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.459 [2024-10-09 00:30:07.049039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101f280 with addr=10.0.0.2, port=4420 00:22:36.459 [2024-10-09 00:30:07.049051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101f280 is same with the state(6) to be set 00:22:36.459 [2024-10-09 00:30:07.049067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcba50 (9): Bad file descriptor 00:22:36.459 [2024-10-09 00:30:07.049403] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.459 [2024-10-09 00:30:07.049448] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.459 [2024-10-09 00:30:07.050062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.459 [2024-10-09 00:30:07.050101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd3ee0 with addr=10.0.0.2, port=4420 00:22:36.459 [2024-10-09 00:30:07.050114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd3ee0 is same with the state(6) to be set 00:22:36.459 [2024-10-09 00:30:07.050132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101f280 (9): Bad file descriptor 00:22:36.459 [2024-10-09 00:30:07.050144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:36.459 [2024-10-09 00:30:07.050153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:36.459 [2024-10-09 00:30:07.050168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:36.459 [2024-10-09 00:30:07.050275] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:36.459 [2024-10-09 00:30:07.050311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:36.459 [2024-10-09 00:30:07.050322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd3ee0 (9): Bad file descriptor 00:22:36.459 [2024-10-09 00:30:07.050331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:36.459 [2024-10-09 00:30:07.050338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:36.459 [2024-10-09 00:30:07.050345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:36.459 [2024-10-09 00:30:07.050412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.459 [2024-10-09 00:30:07.050422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:36.459 [2024-10-09 00:30:07.050429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:36.459 [2024-10-09 00:30:07.050436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:36.459 [2024-10-09 00:30:07.050477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.459 [2024-10-09 00:30:07.052056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.052087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.052104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.052121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.052138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.052155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.052172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.052188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.052209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.459 [2024-10-09 00:30:07.052226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.459 [2024-10-09 00:30:07.052233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.460 [2024-10-09 00:30:07.052689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 00:30:07.052848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.460 [2024-10-09 00:30:07.052858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.460 [2024-10-09 
00:30:07.052865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.052875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.052882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.052892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.052899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.052908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.052915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.052925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.052932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.052942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.052949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.052959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.052966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.052975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.052982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.052992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.052999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.053016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.053033] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.053049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.053068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.053085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.053101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.053118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.053135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.053151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.053160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11010b0 is same with the state(6) to be set 00:22:36.461 [2024-10-09 00:30:07.054447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054677] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.461 [2024-10-09 00:30:07.054870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.461 [2024-10-09 00:30:07.054877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.054887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.054894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.054904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.054911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.054920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.054928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.054938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.054945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.054954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.054962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.054971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.054978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.054988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.054995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.462 [2024-10-09 00:30:07.055368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 
00:30:07.055535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.055552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.462 [2024-10-09 00:30:07.055560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105afa0 is same with the state(6) to be set 00:22:36.462 [2024-10-09 00:30:07.056832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.462 [2024-10-09 00:30:07.056845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.056859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.056868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.056880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.056889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.056900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.056909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.056920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.056932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.056943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.056951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.056961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.056968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.056977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.056985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.056994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.463 [2024-10-09 00:30:07.057572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.463 [2024-10-09 00:30:07.057580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.057934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.057942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105bf10 is same with the state(6) to be set 00:22:36.464 [2024-10-09 00:30:07.059211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059290] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.464 [2024-10-09 00:30:07.059477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-10-09 00:30:07.059488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:36.465 [2024-10-09 00:30:07.059980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.059987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.059996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 
00:30:07.060147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.465 [2024-10-09 00:30:07.060180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.465 [2024-10-09 00:30:07.060187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.060196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.060204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.060213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.060221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.060230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.060237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.060248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.060255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.060265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.060272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.060281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.060288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.060298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.060305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.060313] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105d490 is same with the state(6) to be set 00:22:36.466 [2024-10-09 00:30:07.061592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.061988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.061996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.466 [2024-10-09 00:30:07.062183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.466 [2024-10-09 00:30:07.062192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.062686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.062694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1e0e0 is same with the state(6) to be set 00:22:36.467 [2024-10-09 00:30:07.063965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.063980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.063994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.467 [2024-10-09 00:30:07.064179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.467 [2024-10-09 00:30:07.064189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.468 [2024-10-09 00:30:07.064769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.468 [2024-10-09 00:30:07.064811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.468 [2024-10-09 00:30:07.064819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.064838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.064855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.064871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.064888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.064905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.064921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 
00:30:07.064938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.064955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.064971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.064989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.064998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.065005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.065015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.065022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.065031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.065040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.065049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.065057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.065067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.469 [2024-10-09 00:30:07.065074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.469 [2024-10-09 00:30:07.065082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd9090 is same with the state(6) to be set 00:22:36.731 [2024-10-09 00:30:07.067736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.731 [2024-10-09 00:30:07.067770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067964] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.067988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.067998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.732 [2024-10-09 00:30:07.068475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:36.732 [2024-10-09 00:30:07.068482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:36.733 [2024-10-09 00:30:07.068652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 
00:30:07.068829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.733 [2024-10-09 00:30:07.068862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.733 [2024-10-09 00:30:07.068871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdda5a0 is same with the state(6) to be set 00:22:36.733 [2024-10-09 00:30:07.070382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:36.733 [2024-10-09 00:30:07.070408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:36.733 [2024-10-09 00:30:07.070419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:36.733 [2024-10-09 00:30:07.070429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:36.733 [2024-10-09 00:30:07.070513] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:36.733 [2024-10-09 00:30:07.070527] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:36.733 [2024-10-09 00:30:07.070539] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:36.733 [2024-10-09 00:30:07.087492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 
00:22:36.733 [2024-10-09 00:30:07.087522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 
00:22:36.733 task offset: 27264 on job bdev=Nvme2n1 fails 
00:22:36.733 
00:22:36.733 Latency(us) 
00:22:36.733 [2024-10-08T22:30:07.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:36.733 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme1n1 ended in about 0.99 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme1n1 : 0.99 129.13 8.07 64.56 0.00 326887.82 19442.35 267386.88 
00:22:36.733 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme2n1 ended in about 0.98 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme2n1 : 0.98 195.88 12.24 65.29 0.00 237479.25 11796.48 265639.25 
00:22:36.733 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme3n1 ended in about 0.98 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme3n1 : 0.98 195.09 12.19 65.03 0.00 233617.71 14417.92 276125.01 
00:22:36.733 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme4n1 ended in about 0.99 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme4n1 : 0.99 193.23 12.08 64.41 0.00 231109.12 10868.05 256901.12 
00:22:36.733 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme5n1 ended in about 1.00 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme5n1 : 1.00 196.78 12.30 64.26 0.00 223388.91 26651.31 199229.44 
00:22:36.733 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme6n1 ended in about 1.00 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme6n1 : 1.00 128.21 8.01 64.10 0.00 296944.36 17039.36 255153.49 
00:22:36.733 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme7n1 ended in about 0.98 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme7n1 : 0.98 195.37 12.21 65.12 0.00 213882.24 15947.09 249910.61 
00:22:36.733 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme8n1 ended in about 1.00 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme8n1 : 1.00 194.85 12.18 63.95 0.00 211068.62 10048.85 248162.99 
00:22:36.733 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme9n1 ended in about 1.00 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme9n1 : 1.00 131.59 8.22 63.80 0.00 273465.12 19114.67 272629.76 
00:22:36.733 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:36.733 Job: Nvme10n1 ended in about 1.01 seconds with error 
00:22:36.733 Verification LBA range: start 0x0 length 0x400 
00:22:36.733 Nvme10n1 : 1.01 190.68 11.92 63.56 0.00 205460.69 17367.04 227191.47 
00:22:36.733 [2024-10-08T22:30:07.368Z] =================================================================================================================== 
00:22:36.733 [2024-10-08T22:30:07.368Z] Total : 1750.79 109.42 644.08 0.00 240958.12 10048.85 276125.01 
00:22:36.733 [2024-10-09 00:30:07.112733] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:22:36.733 [2024-10-09 00:30:07.112765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 
00:22:36.733 [2024-10-09 00:30:07.113151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:36.733 [2024-10-09 00:30:07.113170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd4d70 with addr=10.0.0.2, port=4420 
00:22:36.733 [2024-10-09 00:30:07.113181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd4d70 is same with the state(6) to be set 
00:22:36.733 [2024-10-09 00:30:07.113453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:36.734 [2024-10-09 00:30:07.113469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff3450 with addr=10.0.0.2, port=4420 
00:22:36.734 [2024-10-09 00:30:07.113477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3450 is same with the state(6) to be set 
00:22:36.734 [2024-10-09 00:30:07.113807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:36.734 [2024-10-09 00:30:07.113818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff19d0 with addr=10.0.0.2, port=4420 
00:22:36.734 [2024-10-09 00:30:07.113826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff19d0 is same with the state(6) to be set 
00:22:36.734 [2024-10-09 00:30:07.114000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:36.734 [2024-10-09 00:30:07.114010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcabe0 with addr=10.0.0.2, port=4420 
00:22:36.734 [2024-10-09 00:30:07.114018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcabe0 is same with the state(6) to be set 
00:22:36.734 [2024-10-09 00:30:07.114044] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:36.734 [2024-10-09 00:30:07.114057] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:36.734 [2024-10-09 00:30:07.114075] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:36.734 [2024-10-09 00:30:07.114095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcabe0 (9): Bad file descriptor 00:22:36.734 [2024-10-09 00:30:07.114111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff19d0 (9): Bad file descriptor 00:22:36.734 [2024-10-09 00:30:07.114125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff3450 (9): Bad file descriptor 00:22:36.734 [2024-10-09 00:30:07.114138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4d70 (9): Bad file descriptor 00:22:36.734 1750.79 IOPS, 109.42 MiB/s [2024-10-08T22:30:07.369Z] [2024-10-09 00:30:07.116002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:36.734 [2024-10-09 00:30:07.116017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:36.734 [2024-10-09 00:30:07.116269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.734 [2024-10-09 00:30:07.116282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1018820 with addr=10.0.0.2, port=4420 00:22:36.734 [2024-10-09 00:30:07.116290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1018820 is same with the state(6) to be set 00:22:36.734 [2024-10-09 00:30:07.116553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.734 [2024-10-09 00:30:07.116564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1066bc0 with addr=10.0.0.2, port=4420 00:22:36.734 [2024-10-09 00:30:07.116572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1066bc0 is same with the state(6) to be set 00:22:36.734 [2024-10-09 00:30:07.116903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.734 [2024-10-09 00:30:07.116914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101e810 with addr=10.0.0.2, port=4420 00:22:36.734 [2024-10-09 00:30:07.116922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101e810 is same with the state(6) to be set 00:22:36.734 [2024-10-09 00:30:07.116949] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:36.734 [2024-10-09 00:30:07.116962] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:36.734 [2024-10-09 00:30:07.116973] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:36.734 [2024-10-09 00:30:07.116984] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:36.734 [2024-10-09 00:30:07.116997] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:36.734 [2024-10-09 00:30:07.117261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:36.734 [2024-10-09 00:30:07.117600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.734 [2024-10-09 00:30:07.117614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcba50 with addr=10.0.0.2, port=4420 00:22:36.734 [2024-10-09 00:30:07.117622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcba50 is same with the state(6) to be set 00:22:36.734 [2024-10-09 00:30:07.117832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.734 [2024-10-09 00:30:07.117843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101f280 with addr=10.0.0.2, port=4420 00:22:36.734 [2024-10-09 00:30:07.117850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101f280 is same with the state(6) to be set 00:22:36.734 [2024-10-09 00:30:07.117860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1018820 (9): Bad file descriptor 00:22:36.734 [2024-10-09 00:30:07.117874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1066bc0 (9): Bad file descriptor 00:22:36.734 [2024-10-09 00:30:07.117883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101e810 (9): Bad file descriptor 00:22:36.734 [2024-10-09 00:30:07.117892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:36.734 [2024-10-09 00:30:07.117899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:36.734 [2024-10-09 00:30:07.117907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:36.734 [2024-10-09 00:30:07.117919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:36.734 [2024-10-09 00:30:07.117925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:36.734 [2024-10-09 00:30:07.117932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:36.734 [2024-10-09 00:30:07.117942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:36.734 [2024-10-09 00:30:07.117949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:36.734 [2024-10-09 00:30:07.117956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:36.734 [2024-10-09 00:30:07.117966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:36.734 [2024-10-09 00:30:07.117972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:36.734 [2024-10-09 00:30:07.117979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:36.734 [2024-10-09 00:30:07.118055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:36.734 [2024-10-09 00:30:07.118065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.734 [2024-10-09 00:30:07.118071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.734 [2024-10-09 00:30:07.118078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.734 [2024-10-09 00:30:07.118387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.734 [2024-10-09 00:30:07.118398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd3ee0 with addr=10.0.0.2, port=4420 00:22:36.734 [2024-10-09 00:30:07.118406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd3ee0 is same with the state(6) to be set 00:22:36.734 [2024-10-09 00:30:07.118415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcba50 (9): Bad file descriptor 00:22:36.734 [2024-10-09 00:30:07.118425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101f280 (9): Bad file descriptor 00:22:36.734 [2024-10-09 00:30:07.118433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:36.734 [2024-10-09 00:30:07.118440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:36.734 [2024-10-09 00:30:07.118447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:36.734 [2024-10-09 00:30:07.118456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:36.734 [2024-10-09 00:30:07.118464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:36.734 [2024-10-09 00:30:07.118470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:36.734 [2024-10-09 00:30:07.118480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:36.734 [2024-10-09 00:30:07.118489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:36.734 [2024-10-09 00:30:07.118496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:36.734 [2024-10-09 00:30:07.118523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.734 [2024-10-09 00:30:07.118531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.734 [2024-10-09 00:30:07.118537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.734 [2024-10-09 00:30:07.118545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd3ee0 (9): Bad file descriptor 00:22:36.735 [2024-10-09 00:30:07.118553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:36.735 [2024-10-09 00:30:07.118559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:36.735 [2024-10-09 00:30:07.118566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:36.735 [2024-10-09 00:30:07.118575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:36.735 [2024-10-09 00:30:07.118582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:36.735 [2024-10-09 00:30:07.118589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:36.735 [2024-10-09 00:30:07.118617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.735 [2024-10-09 00:30:07.118624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.735 [2024-10-09 00:30:07.118631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:36.735 [2024-10-09 00:30:07.118637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:36.735 [2024-10-09 00:30:07.118644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:36.735 [2024-10-09 00:30:07.118673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:36.735 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3320554 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3320554 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3320554 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 
00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.676 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.936 rmmod nvme_tcp 00:22:37.936 rmmod nvme_fabrics 00:22:37.936 rmmod nvme_keyring 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 3320170 ']' 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 3320170 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3320170 ']' 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3320170 00:22:37.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3320170) - No such process 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3320170 is not found' 00:22:37.936 Process with pid 3320170 is not found 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.936 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.880 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.880 00:22:39.880 real 0m8.226s 00:22:39.880 user 0m21.250s 00:22:39.880 sys 0m1.321s 00:22:39.880 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:39.880 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:39.881 ************************************ 00:22:39.881 END TEST nvmf_shutdown_tc3 00:22:39.881 ************************************ 00:22:39.881 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:39.881 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:39.881 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:39.881 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:39.881 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:39.881 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:40.142 ************************************ 00:22:40.142 START TEST nvmf_shutdown_tc4 00:22:40.142 ************************************ 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:40.142 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.143 00:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:40.143 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:40.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.143 00:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:40.143 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:40.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.143 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:40.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:22:40.405 00:22:40.405 --- 10.0.0.2 ping statistics --- 00:22:40.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.405 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:22:40.405 00:22:40.405 --- 10.0.0.1 ping statistics --- 00:22:40.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.405 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=3322427 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 3322427 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3322427 ']' 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.405 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:40.405 [2024-10-09 00:30:10.995397] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:22:40.405 [2024-10-09 00:30:10.995467] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.665 [2024-10-09 00:30:11.084264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.665 [2024-10-09 00:30:11.146399] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.665 [2024-10-09 00:30:11.146432] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.665 [2024-10-09 00:30:11.146438] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.665 [2024-10-09 00:30:11.146443] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.665 [2024-10-09 00:30:11.146448] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.665 [2024-10-09 00:30:11.147944] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.665 [2024-10-09 00:30:11.148151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.665 [2024-10-09 00:30:11.148303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.665 [2024-10-09 00:30:11.148304] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:41.234 [2024-10-09 00:30:11.836887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.234 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:41.496 00:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.496 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:41.496 Malloc1 00:22:41.496 [2024-10-09 00:30:11.935524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.496 Malloc2 00:22:41.496 Malloc3 00:22:41.496 Malloc4 00:22:41.496 Malloc5 00:22:41.496 Malloc6 00:22:41.756 Malloc7 00:22:41.756 Malloc8 00:22:41.757 Malloc9 00:22:41.757 Malloc10 00:22:41.757 00:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.757 00:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:41.757 00:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.757 00:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:41.757 00:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3322853 00:22:41.757 00:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:41.757 00:30:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:42.017 [2024-10-09 00:30:12.406184] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
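[editor's note] At this point in the log the target has been configured with ten subsystems backed by Malloc1..Malloc10 bdevs, a TCP listener is up on 10.0.0.2 port 4420 inside the cvl_0_0_ns_spdk namespace, and spdk_nvme_perf has been launched from the host side (-q 128, 45056-byte writes, randwrite, 20 s). Test case tc4 then kills the nvmf_tgt process while that workload is still in flight, so the tqpair recv-state and "CQ transport error -6" messages that follow are the expected initiator-side fallout of the shutdown, not a harness failure. A minimal sketch of the same kill-while-running pattern, assuming a target already listening on 10.0.0.2:4420; the variable names and the sleep interval are illustrative, not taken from this run:

    # start the fio-like perf workload in the background against the target
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' &
    perfpid=$!

    sleep 5                 # let I/O ramp up before pulling the target away
    kill "$nvmfpid"         # stop nvmf_tgt mid-run; initiator should report CQ transport errors
    wait "$perfpid" || true # perf is expected to exit with I/O failures here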
00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3322427 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3322427 ']' 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3322427 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3322427 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3322427' 00:22:47.305 killing process with pid 3322427 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3322427 00:22:47.305 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3322427 00:22:47.305 [2024-10-09 00:30:17.411397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13211e0 is same with the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.411444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13211e0 is same with the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.411450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13211e0 is same with the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.411462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13211e0 is same with the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.411467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13211e0 is same with the state(6) to be set 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed 
with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 [2024-10-09 00:30:17.411876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321bc0 is same with Write completed with error (sct=0, sc=8) 00:22:47.305 the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.411903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321bc0 is same with the state(6) to be set 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 [2024-10-09 00:30:17.411910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321bc0 is same with the state(6) to be set 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 [2024-10-09 00:30:17.412166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320d10 is same with the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.412189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320d10 is same with the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.412194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320d10 is same with the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.412200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320d10 is same with the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.412205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320d10 is same with the state(6) to be set 00:22:47.305 [2024-10-09 00:30:17.412223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 
00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.305 starting I/O failed: -6 00:22:47.305 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 [2024-10-09 00:30:17.413448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:47.306 NVMe io qpair process completion error 00:22:47.306 [2024-10-09 00:30:17.414372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13517e0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.414389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13517e0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.414394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13517e0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.414399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13517e0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.414773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1351cb0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.414797] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1351cb0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.414803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1351cb0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320970 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133bec0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133bec0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133bec0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133bec0 is same with the state(6) to be set 00:22:47.306 [2024-10-09 00:30:17.415378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133bec0 is same with the state(6) to be set 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, 
sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 [2024-10-09 00:30:17.417832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error 
(sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 Write completed with error (sct=0, sc=8) 00:22:47.306 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 [2024-10-09 00:30:17.418706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 
00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 [2024-10-09 00:30:17.419584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bcee0 is same with the state(6) to be set 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 [2024-10-09 00:30:17.419601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bcee0 is same with the state(6) to be set 00:22:47.307 starting I/O failed: -6 00:22:47.307 [2024-10-09 00:30:17.419606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bcee0 is same with the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.419611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bcee0 is same with Write completed with error (sct=0, sc=8) 00:22:47.307 the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.419618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bcee0 is same with the state(6) to be set 00:22:47.307 starting I/O failed: -6 00:22:47.307 [2024-10-09 00:30:17.419622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bcee0 is same with the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.419627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bcee0 is same with Write completed with error (sct=0, sc=8) 00:22:47.307 the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.419634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bcee0 is same with the state(6) to be set 00:22:47.307 starting I/O failed: -6 00:22:47.307 
[2024-10-09 00:30:17.419638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bcee0 is same with the state(6) to be set 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 [2024-10-09 00:30:17.419834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321f60 is same with Write completed with error (sct=0, sc=8) 00:22:47.307 the state(6) to be set 00:22:47.307 starting I/O failed: -6 00:22:47.307 [2024-10-09 00:30:17.419850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321f60 is same with the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.419856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321f60 is same with Write completed with error (sct=0, sc=8) 00:22:47.307 the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.419862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321f60 is same with the state(6) to be set 00:22:47.307 starting I/O failed: -6 00:22:47.307 [2024-10-09 00:30:17.419867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321f60 is same with the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.419872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321f60 is same with Write completed with error (sct=0, sc=8) 00:22:47.307 the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.419878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321f60 is same with the state(6) to be set 00:22:47.307 starting I/O failed: -6 00:22:47.307 [2024-10-09 00:30:17.419883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321f60 is same with the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.419888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1321f60 is same with Write completed with error (sct=0, sc=8) 00:22:47.307 the state(6) to be set 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 
Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 starting I/O failed: -6 00:22:47.307 [2024-10-09 00:30:17.420148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1322450 is same with the state(6) to be set 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.307 [2024-10-09 00:30:17.420164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1322450 is same with the state(6) to be set 00:22:47.307 starting I/O failed: -6 00:22:47.307 [2024-10-09 00:30:17.420169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1322450 is same with the state(6) to be set 00:22:47.307 [2024-10-09 00:30:17.420176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1322450 is same with the state(6) to be set 00:22:47.307 Write completed with error (sct=0, sc=8) 00:22:47.308 [2024-10-09 00:30:17.420181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1322450 is same with the state(6) to be set 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 [2024-10-09 00:30:17.420377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bca10 is same with the state(6) to be set 00:22:47.308 [2024-10-09 00:30:17.420391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bca10 is same with the state(6) to be set 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 
00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 [2024-10-09 00:30:17.420746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:47.308 NVMe io qpair process completion error 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 [2024-10-09 00:30:17.422031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 
starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 [2024-10-09 00:30:17.422818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 
00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.308 starting I/O failed: -6 00:22:47.308 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 [2024-10-09 00:30:17.423742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting 
I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O 
failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 [2024-10-09 00:30:17.425358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:47.309 NVMe io qpair process completion error 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 
00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 [2024-10-09 00:30:17.426587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:47.309 starting I/O failed: -6 00:22:47.309 starting I/O failed: -6 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 starting I/O failed: -6 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.309 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 
Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 [2024-10-09 00:30:17.427556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with 
error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 [2024-10-09 00:30:17.428480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 
starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.310 Write completed with error (sct=0, sc=8) 00:22:47.310 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 [2024-10-09 00:30:17.431328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:47.311 NVMe io qpair process completion error 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with 
error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 [2024-10-09 00:30:17.432441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with 
error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 [2024-10-09 00:30:17.433255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 
00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.311 Write completed with error (sct=0, sc=8) 00:22:47.311 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 [2024-10-09 00:30:17.434181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 
00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 
00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 [2024-10-09 00:30:17.435620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:47.312 NVMe io qpair process completion error 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with 
error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 [2024-10-09 00:30:17.437191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:47.312 starting I/O failed: -6 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 starting I/O failed: -6 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.312 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 [2024-10-09 00:30:17.438056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting 
I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 [2024-10-09 
00:30:17.439001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error 
(sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.313 Write completed with error (sct=0, sc=8) 00:22:47.313 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 [2024-10-09 00:30:17.441731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:47.314 NVMe io qpair process completion error 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed 
with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 [2024-10-09 00:30:17.442823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 starting I/O failed: -6 00:22:47.314 Write completed with error (sct=0, sc=8) 00:22:47.314 Write completed with error (sct=0, sc=8) 
00:22:47.314 Write completed with error (sct=0, sc=8)
00:22:47.314 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:22:47.314 [2024-10-09 00:30:17.443728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:22:47.315 [2024-10-09 00:30:17.444607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:22:47.315 [2024-10-09 00:30:17.447003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:47.315 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:22:47.315 [2024-10-09 00:30:17.448004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
00:22:47.316 [2024-10-09 00:30:17.448830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:22:47.316 [2024-10-09 00:30:17.449766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
00:22:47.317 [2024-10-09 00:30:17.451663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:47.317 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:22:47.317 [2024-10-09 00:30:17.452809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
00:22:47.317 [2024-10-09 00:30:17.453623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:22:47.317 [2024-10-09 00:30:17.454563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:22:47.318 [2024-10-09 00:30:17.456507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:47.318 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:22:47.318 [2024-10-09 00:30:17.458866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:47.318 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:22:47.319 [2024-10-09 00:30:17.460020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
00:22:47.319 [2024-10-09 00:30:17.460845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:22:47.320 [2024-10-09 00:30:17.461791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
00:22:47.320 [2024-10-09 00:30:17.463646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:47.320 NVMe io qpair process completion error
00:22:47.320 Initializing NVMe Controllers
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:47.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:47.320 Controller IO queue size 128, less than required.
00:22:47.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
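The queue-size warnings above mean each attached subsystem advertises an IO queue of only 128 entries, smaller than the queue depth the perf initiator requested, so the excess requests sit queued in the host NVMe driver. As a minimal sketch (not taken from this run) of how the depth could be lowered when invoking spdk_nvme_perf by hand, using the tool's standard -r/-q/-o/-w/-t options; the transport ID string and values below are illustrative only:

# hypothetical manual invocation: queue depth 64, below the target's 128-entry IO queue
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -q 64 -o 4096 -w write -t 10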
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:47.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:47.320 Initialization complete. Launching workers.
00:22:47.320 ========================================================
00:22:47.320 Latency(us)
00:22:47.320 Device Information : IOPS MiB/s Average min max
00:22:47.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1899.61 81.62 67398.99 691.65 123044.87
00:22:47.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1905.18 81.86 67043.63 996.26 122538.60
00:22:47.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1921.66 82.57 66641.14 701.50 133310.31
00:22:47.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1898.12 81.56 66813.94 677.29 121128.35
00:22:47.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1859.59 79.90 68215.68 786.41 118937.75
00:22:47.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1860.66 79.95 68201.03 709.37 116869.96
00:22:47.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1875.64 80.59 67694.07 693.71 117650.81
00:22:47.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1893.19 81.35 67092.60 647.43 124324.24
00:22:47.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1903.47 81.79 66765.76 800.95 128066.79
00:22:47.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1882.06 80.87 67555.46 825.98 121666.87
00:22:47.321 ========================================================
00:22:47.321 Total : 18899.19 812.07 67337.07 647.43 133310.31
00:22:47.321
00:22:47.321 [2024-10-09 00:30:17.467255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0b630 is same with the state(6) to be set
00:22:47.321 [2024-10-09 00:30:17.467299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0d7f0 is same with the state(6) to be set
00:22:47.321 [2024-10-09 00:30:17.467328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12190 is same with the state(6) to be set
00:22:47.321 [2024-10-09 00:30:17.467358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0d9d0 is same with the state(6) to be set
00:22:47.321 [2024-10-09 00:30:17.467386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0bc90 is same with the state(6) to be set
00:22:47.321 [2024-10-09 00:30:17.467414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xd0bfc0 is same with the state(6) to be set 00:22:47.321 [2024-10-09 00:30:17.467443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0b960 is same with the state(6) to be set 00:22:47.321 [2024-10-09 00:30:17.467471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0dbb0 is same with the state(6) to be set 00:22:47.321 [2024-10-09 00:30:17.467499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd11e60 is same with the state(6) to be set 00:22:47.321 [2024-10-09 00:30:17.467526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd124c0 is same with the state(6) to be set 00:22:47.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:47.321 00:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:48.263 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3322853 00:22:48.263 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:22:48.263 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3322853 00:22:48.263 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:48.263 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.263 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:48.263 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3322853 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:48.264 00:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.264 rmmod nvme_tcp 00:22:48.264 rmmod nvme_fabrics 00:22:48.264 rmmod nvme_keyring 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 3322427 ']' 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 3322427 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3322427 ']' 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3322427 00:22:48.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3322427) - No such process 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3322427 is not found' 00:22:48.264 Process with pid 3322427 is not found 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.264 00:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.810 00:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.810 00:22:50.810 real 0m10.294s 00:22:50.810 user 0m27.946s 00:22:50.810 sys 0m3.941s 00:22:50.810 00:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.810 00:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.810 ************************************ 00:22:50.810 END TEST nvmf_shutdown_tc4 00:22:50.810 ************************************ 00:22:50.810 00:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:50.810 00:22:50.810 real 0m43.866s 00:22:50.810 user 1m47.206s 00:22:50.810 sys 0m13.865s 00:22:50.810 00:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.810 00:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.810 ************************************ 00:22:50.810 END TEST nvmf_shutdown 00:22:50.810 ************************************ 00:22:50.810 00:30:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:50.810 00:22:50.810 real 12m43.598s 00:22:50.810 user 27m0.238s 00:22:50.810 sys 3m43.402s 00:22:50.810 00:30:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.810 00:30:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:50.810 ************************************ 00:22:50.810 END TEST nvmf_target_extra 00:22:50.810 ************************************ 00:22:50.810 00:30:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:50.810 00:30:20 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:50.810 00:30:20 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.810 00:30:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.810 ************************************ 00:22:50.810 START TEST nvmf_host 00:22:50.810 ************************************ 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:50.811 * Looking for test storage... 
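With the shutdown group closed out (the real/user/sys triplets above are the per-suite timings reported at each END TEST banner), run_test moves on to the nvmf_host group, beginning with the multicontroller case whose trace follows. Either level can also be invoked directly from the same checked-out tree when a failure needs to be reproduced outside Jenkins; the workspace path is the one from this run, while running as root and sourcing the run's autorun-spdk.conf beforehand are assumptions:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # whole host-side group
    sudo ./test/nvmf/nvmf_host.sh --transport=tcp
    # or just the multicontroller test
    sudo ./test/nvmf/host/multicontroller.sh --transport=tcp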
00:22:50.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:50.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.811 --rc genhtml_branch_coverage=1 00:22:50.811 --rc genhtml_function_coverage=1 00:22:50.811 --rc genhtml_legend=1 00:22:50.811 --rc geninfo_all_blocks=1 00:22:50.811 --rc geninfo_unexecuted_blocks=1 00:22:50.811 00:22:50.811 ' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:50.811 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.811 --rc genhtml_branch_coverage=1 00:22:50.811 --rc genhtml_function_coverage=1 00:22:50.811 --rc genhtml_legend=1 00:22:50.811 --rc geninfo_all_blocks=1 00:22:50.811 --rc geninfo_unexecuted_blocks=1 00:22:50.811 00:22:50.811 ' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:50.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.811 --rc genhtml_branch_coverage=1 00:22:50.811 --rc genhtml_function_coverage=1 00:22:50.811 --rc genhtml_legend=1 00:22:50.811 --rc geninfo_all_blocks=1 00:22:50.811 --rc geninfo_unexecuted_blocks=1 00:22:50.811 00:22:50.811 ' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:50.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.811 --rc genhtml_branch_coverage=1 00:22:50.811 --rc genhtml_function_coverage=1 00:22:50.811 --rc genhtml_legend=1 00:22:50.811 --rc geninfo_all_blocks=1 00:22:50.811 --rc geninfo_unexecuted_blocks=1 00:22:50.811 00:22:50.811 ' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.811 ************************************ 00:22:50.811 START TEST nvmf_multicontroller 00:22:50.811 ************************************ 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:50.811 * Looking for test storage... 00:22:50.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:22:50.811 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:51.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.073 --rc genhtml_branch_coverage=1 00:22:51.073 --rc genhtml_function_coverage=1 00:22:51.073 --rc genhtml_legend=1 00:22:51.073 --rc geninfo_all_blocks=1 00:22:51.073 --rc geninfo_unexecuted_blocks=1 00:22:51.073 00:22:51.073 ' 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:51.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.073 --rc genhtml_branch_coverage=1 00:22:51.073 --rc genhtml_function_coverage=1 00:22:51.073 --rc genhtml_legend=1 00:22:51.073 --rc geninfo_all_blocks=1 00:22:51.073 --rc geninfo_unexecuted_blocks=1 00:22:51.073 00:22:51.073 ' 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:51.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.073 --rc genhtml_branch_coverage=1 00:22:51.073 --rc genhtml_function_coverage=1 00:22:51.073 --rc genhtml_legend=1 00:22:51.073 --rc geninfo_all_blocks=1 00:22:51.073 --rc geninfo_unexecuted_blocks=1 00:22:51.073 00:22:51.073 ' 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:51.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.073 --rc genhtml_branch_coverage=1 00:22:51.073 --rc genhtml_function_coverage=1 00:22:51.073 --rc genhtml_legend=1 00:22:51.073 --rc geninfo_all_blocks=1 00:22:51.073 --rc geninfo_unexecuted_blocks=1 00:22:51.073 00:22:51.073 ' 00:22:51.073 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:51.074 00:30:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:51.074 00:30:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.074 00:30:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:59.253 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.254 
00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:59.254 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:59.254 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.254 00:30:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:59.254 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:59.254 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
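nvmf_tcp_init, traced below, turns the two detected E810 ports into the test topology: cvl_0_0 is moved into a private network namespace as the target side on 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator on 10.0.0.1, the listener port is opened in the firewall, and reachability is checked with a ping in each direction. A condensed sketch of the equivalent manual steps, with the interface and namespace names taken from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic to port 4420 through on the initiator side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # reachability check in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1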
00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.254 00:30:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:22:59.254 00:22:59.254 --- 10.0.0.2 ping statistics --- 00:22:59.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.254 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:22:59.254 00:22:59.254 --- 10.0.0.1 ping statistics --- 00:22:59.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.254 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=3328263 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 3328263 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3328263 ']' 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.254 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.255 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.255 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.255 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.255 [2024-10-09 00:30:29.148075] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:22:59.255 [2024-10-09 00:30:29.148137] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.255 [2024-10-09 00:30:29.237613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:59.255 [2024-10-09 00:30:29.330815] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.255 [2024-10-09 00:30:29.330871] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.255 [2024-10-09 00:30:29.330880] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.255 [2024-10-09 00:30:29.330887] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.255 [2024-10-09 00:30:29.330894] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.255 [2024-10-09 00:30:29.332499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.255 [2024-10-09 00:30:29.332661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.255 [2024-10-09 00:30:29.332661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.516 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.516 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:59.516 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:59.516 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:59.516 00:30:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.516 [2024-10-09 00:30:30.008310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.516 Malloc0 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.516 [2024-10-09 00:30:30.091973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.516 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.516 [2024-10-09 00:30:30.103851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.517 Malloc1 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.517 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3328613 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3328613 /var/tmp/bdevperf.sock 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3328613 ']' 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.778 00:30:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.722 NVMe0n1 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.722 1 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.722 request: 00:23:00.722 { 00:23:00.722 "name": "NVMe0", 00:23:00.722 "trtype": "tcp", 00:23:00.722 "traddr": "10.0.0.2", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "4420", 00:23:00.722 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:00.722 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:00.722 "hostaddr": "10.0.0.1", 00:23:00.722 "prchk_reftag": false, 00:23:00.722 "prchk_guard": false, 00:23:00.722 "hdgst": false, 00:23:00.722 "ddgst": false, 00:23:00.722 "allow_unrecognized_csi": false, 00:23:00.722 "method": "bdev_nvme_attach_controller", 00:23:00.722 "req_id": 1 00:23:00.722 } 00:23:00.722 Got JSON-RPC error response 00:23:00.722 response: 00:23:00.722 { 00:23:00.722 "code": -114, 00:23:00.722 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.722 } 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:00.722 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.723 request: 00:23:00.723 { 00:23:00.723 "name": "NVMe0", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.723 "hostaddr": "10.0.0.1", 00:23:00.723 "prchk_reftag": false, 00:23:00.723 "prchk_guard": false, 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false, 00:23:00.723 "allow_unrecognized_csi": false, 00:23:00.723 "method": "bdev_nvme_attach_controller", 00:23:00.723 "req_id": 1 00:23:00.723 } 00:23:00.723 Got JSON-RPC error response 00:23:00.723 response: 00:23:00.723 { 00:23:00.723 "code": -114, 00:23:00.723 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.723 } 00:23:00.723 00:30:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.723 request: 00:23:00.723 { 00:23:00.723 "name": "NVMe0", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.723 "hostaddr": "10.0.0.1", 00:23:00.723 "prchk_reftag": false, 00:23:00.723 "prchk_guard": false, 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false, 00:23:00.723 "multipath": "disable", 00:23:00.723 "allow_unrecognized_csi": false, 00:23:00.723 "method": "bdev_nvme_attach_controller", 00:23:00.723 "req_id": 1 00:23:00.723 } 00:23:00.723 Got JSON-RPC error response 00:23:00.723 response: 00:23:00.723 { 00:23:00.723 "code": -114, 00:23:00.723 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:00.723 } 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:00.723 00:30:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.723 request: 00:23:00.723 { 00:23:00.723 "name": "NVMe0", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.723 "hostaddr": "10.0.0.1", 00:23:00.723 "prchk_reftag": false, 00:23:00.723 "prchk_guard": false, 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false, 00:23:00.723 "multipath": "failover", 00:23:00.723 "allow_unrecognized_csi": false, 00:23:00.723 "method": "bdev_nvme_attach_controller", 00:23:00.723 "req_id": 1 00:23:00.723 } 00:23:00.723 Got JSON-RPC error response 00:23:00.723 response: 00:23:00.723 { 00:23:00.723 "code": -114, 00:23:00.723 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.723 } 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.723 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.986 NVMe0n1 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
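To recap the four NOT checks above: every attempt to attach a second controller under the name NVMe0 on the already-registered path (different hostnqn, different subsystem NQN, `-x disable`, `-x failover`) is rejected with JSON-RPC error -114, while the plain attach to the second listener on port 4421 that follows succeeds. The failing and succeeding commands, copied from the trace:

    # rejected with code -114 ("A controller named NVMe0 already exists ..."):
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
    # accepted (same subsystem nqn.2016-06.io.spdk:cnode1, second listener port 4421):
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1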
00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.986 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:00.986 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.247 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.247 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:01.247 00:30:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.189 { 00:23:02.189 "results": [ 00:23:02.189 { 00:23:02.189 "job": "NVMe0n1", 00:23:02.189 "core_mask": "0x1", 00:23:02.189 "workload": "write", 00:23:02.189 "status": "finished", 00:23:02.189 "queue_depth": 128, 00:23:02.189 "io_size": 4096, 00:23:02.189 "runtime": 1.006559, 00:23:02.189 "iops": 19055.018136045677, 00:23:02.189 "mibps": 74.43366459392843, 00:23:02.189 "io_failed": 0, 00:23:02.189 "io_timeout": 0, 00:23:02.189 "avg_latency_us": 6700.195370177268, 00:23:02.189 "min_latency_us": 4068.693333333333, 00:23:02.189 "max_latency_us": 16056.32 00:23:02.189 } 00:23:02.189 ], 00:23:02.189 "core_count": 1 00:23:02.189 } 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3328613 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 3328613 ']' 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3328613 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:02.189 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3328613 00:23:02.450 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:02.450 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:02.450 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3328613' 00:23:02.450 killing process with pid 3328613 00:23:02.450 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3328613 00:23:02.450 00:30:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3328613 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:02.450 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:02.450 [2024-10-09 00:30:30.221431] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:23:02.450 [2024-10-09 00:30:30.221522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328613 ] 00:23:02.450 [2024-10-09 00:30:30.306252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.450 [2024-10-09 00:30:30.401305] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.450 [2024-10-09 00:30:31.612196] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name c2f358d4-46a1-4373-b7c8-e7da00a1c909 already exists 00:23:02.450 [2024-10-09 00:30:31.612243] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:c2f358d4-46a1-4373-b7c8-e7da00a1c909 alias for bdev NVMe1n1 00:23:02.450 [2024-10-09 00:30:31.612254] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:02.450 Running I/O for 1 seconds... 00:23:02.450 18988.00 IOPS, 74.17 MiB/s 00:23:02.450 Latency(us) 00:23:02.450 [2024-10-08T22:30:33.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.450 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:02.450 NVMe0n1 : 1.01 19055.02 74.43 0.00 0.00 6700.20 4068.69 16056.32 00:23:02.450 [2024-10-08T22:30:33.085Z] =================================================================================================================== 00:23:02.450 [2024-10-08T22:30:33.085Z] Total : 19055.02 74.43 0.00 0.00 6700.20 4068.69 16056.32 00:23:02.450 Received shutdown signal, test time was about 1.000000 seconds 00:23:02.450 00:23:02.450 Latency(us) 00:23:02.450 [2024-10-08T22:30:33.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.450 [2024-10-08T22:30:33.085Z] =================================================================================================================== 00:23:02.450 [2024-10-08T22:30:33.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.450 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.450 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.450 rmmod nvme_tcp 00:23:02.450 rmmod nvme_fabrics 00:23:02.450 rmmod nvme_keyring 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:02.711 
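The numbers in the bdevperf summary above are internally consistent: 19055.02 IOPS of 4096-byte writes is 19055.02 x 4096 bytes/s, i.e. about 74.43 MiB/s, which is exactly the MiB/s column reported for NVMe0n1 over the 1.006559 s runtime. A one-line check (illustrative only, not part of the run):

    echo "scale=2; 19055.018136045677 * 4096 / 1048576" | bc    # -> 74.43 (MiB/s)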
00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 3328263 ']' 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 3328263 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3328263 ']' 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3328263 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3328263 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3328263' 00:23:02.711 killing process with pid 3328263 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3328263 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3328263 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:02.711 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:23:02.971 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.971 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.971 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.971 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.971 00:30:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.878 00:30:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.878 00:23:04.878 real 0m14.135s 00:23:04.878 user 0m17.268s 00:23:04.878 sys 0m6.551s 00:23:04.878 00:30:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:04.878 00:30:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.878 ************************************ 00:23:04.878 END TEST nvmf_multicontroller 00:23:04.878 ************************************ 00:23:04.878 00:30:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:04.878 00:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:04.878 00:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:04.878 00:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.878 ************************************ 00:23:04.878 START TEST nvmf_aer 00:23:04.878 ************************************ 00:23:04.878 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:05.138 * Looking for test storage... 00:23:05.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:05.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.138 --rc genhtml_branch_coverage=1 00:23:05.138 --rc genhtml_function_coverage=1 00:23:05.138 --rc genhtml_legend=1 00:23:05.138 --rc geninfo_all_blocks=1 00:23:05.138 --rc geninfo_unexecuted_blocks=1 00:23:05.138 00:23:05.138 ' 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:05.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.138 --rc genhtml_branch_coverage=1 00:23:05.138 --rc genhtml_function_coverage=1 00:23:05.138 --rc genhtml_legend=1 00:23:05.138 --rc geninfo_all_blocks=1 00:23:05.138 --rc geninfo_unexecuted_blocks=1 00:23:05.138 00:23:05.138 ' 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:05.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.138 --rc genhtml_branch_coverage=1 00:23:05.138 --rc genhtml_function_coverage=1 00:23:05.138 --rc genhtml_legend=1 00:23:05.138 --rc geninfo_all_blocks=1 00:23:05.138 --rc geninfo_unexecuted_blocks=1 00:23:05.138 00:23:05.138 ' 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:05.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.138 --rc genhtml_branch_coverage=1 00:23:05.138 --rc genhtml_function_coverage=1 00:23:05.138 --rc genhtml_legend=1 00:23:05.138 --rc geninfo_all_blocks=1 00:23:05.138 --rc geninfo_unexecuted_blocks=1 00:23:05.138 00:23:05.138 ' 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.138 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.139 00:30:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:13.296 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:13.296 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:13.296 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:13.296 00:30:42 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:13.296 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.296 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.297 00:30:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:13.297 
00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:13.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:23:13.297 00:23:13.297 --- 10.0.0.2 ping statistics --- 00:23:13.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.297 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:13.297 00:23:13.297 --- 10.0.0.1 ping statistics --- 00:23:13.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.297 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=3333301 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 3333301 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3333301 ']' 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.297 00:30:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.297 [2024-10-09 00:30:43.334998] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:23:13.297 [2024-10-09 00:30:43.335064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.297 [2024-10-09 00:30:43.424618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:13.297 [2024-10-09 00:30:43.521327] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.297 [2024-10-09 00:30:43.521386] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.297 [2024-10-09 00:30:43.521395] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.297 [2024-10-09 00:30:43.521402] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.297 [2024-10-09 00:30:43.521408] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.297 [2024-10-09 00:30:43.523573] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.297 [2024-10-09 00:30:43.523760] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.297 [2024-10-09 00:30:43.523859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.297 [2024-10-09 00:30:43.524041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.573 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.573 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:13.573 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:13.573 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:13.573 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.573 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.573 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:13.573 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.573 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.573 [2024-10-09 00:30:44.201146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.836 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.836 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:13.836 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.837 Malloc0 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.837 [2024-10-09 00:30:44.266864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.837 [ 00:23:13.837 { 00:23:13.837 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:13.837 "subtype": "Discovery", 00:23:13.837 "listen_addresses": [], 00:23:13.837 "allow_any_host": true, 00:23:13.837 "hosts": [] 00:23:13.837 }, 00:23:13.837 { 00:23:13.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.837 "subtype": "NVMe", 00:23:13.837 "listen_addresses": [ 00:23:13.837 { 00:23:13.837 "trtype": "TCP", 00:23:13.837 "adrfam": "IPv4", 00:23:13.837 "traddr": "10.0.0.2", 00:23:13.837 "trsvcid": "4420" 00:23:13.837 } 00:23:13.837 ], 00:23:13.837 "allow_any_host": true, 00:23:13.837 "hosts": [], 00:23:13.837 "serial_number": "SPDK00000000000001", 00:23:13.837 "model_number": "SPDK bdev Controller", 00:23:13.837 "max_namespaces": 2, 00:23:13.837 "min_cntlid": 1, 00:23:13.837 "max_cntlid": 65519, 00:23:13.837 "namespaces": [ 00:23:13.837 { 00:23:13.837 "nsid": 1, 00:23:13.837 "bdev_name": "Malloc0", 00:23:13.837 "name": "Malloc0", 00:23:13.837 "nguid": "701B28173DD4467D919F04A1046D7B61", 00:23:13.837 "uuid": "701b2817-3dd4-467d-919f-04a1046d7b61" 00:23:13.837 } 00:23:13.837 ] 00:23:13.837 } 00:23:13.837 ] 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3333530 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:13.837 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:14.098 Malloc1 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:14.098 Asynchronous Event Request test 00:23:14.098 Attaching to 10.0.0.2 00:23:14.098 Attached to 10.0.0.2 00:23:14.098 Registering asynchronous event callbacks... 00:23:14.098 Starting namespace attribute notice tests for all controllers... 00:23:14.098 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:14.098 aer_cb - Changed Namespace 00:23:14.098 Cleaning up... 
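Condensed, the AER exercise that just completed is the sketch below. rpc.py stands in for the harness's rpc_cmd wrapper (which issues the same RPCs against the target running inside the namespace), and the aer tool path is the one used in this workspace.

  # target: one subsystem capped at two namespaces, first namespace attached, listener on 10.0.0.2:4420
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host: the aer tool connects, arms the AER callback, then touches the file so the script can proceed
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &

  # attaching a second namespace is what fires the namespace-attribute-changed AEN (Changed Namespace List, log page 4) seen above
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2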
00:23:14.098 [ 00:23:14.098 { 00:23:14.098 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:14.098 "subtype": "Discovery", 00:23:14.098 "listen_addresses": [], 00:23:14.098 "allow_any_host": true, 00:23:14.098 "hosts": [] 00:23:14.098 }, 00:23:14.098 { 00:23:14.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.098 "subtype": "NVMe", 00:23:14.098 "listen_addresses": [ 00:23:14.098 { 00:23:14.098 "trtype": "TCP", 00:23:14.098 "adrfam": "IPv4", 00:23:14.098 "traddr": "10.0.0.2", 00:23:14.098 "trsvcid": "4420" 00:23:14.098 } 00:23:14.098 ], 00:23:14.098 "allow_any_host": true, 00:23:14.098 "hosts": [], 00:23:14.098 "serial_number": "SPDK00000000000001", 00:23:14.098 "model_number": "SPDK bdev Controller", 00:23:14.098 "max_namespaces": 2, 00:23:14.098 "min_cntlid": 1, 00:23:14.098 "max_cntlid": 65519, 00:23:14.098 "namespaces": [ 00:23:14.098 { 00:23:14.098 "nsid": 1, 00:23:14.098 "bdev_name": "Malloc0", 00:23:14.098 "name": "Malloc0", 00:23:14.098 "nguid": "701B28173DD4467D919F04A1046D7B61", 00:23:14.098 "uuid": "701b2817-3dd4-467d-919f-04a1046d7b61" 00:23:14.098 }, 00:23:14.098 { 00:23:14.098 "nsid": 2, 00:23:14.098 "bdev_name": "Malloc1", 00:23:14.098 "name": "Malloc1", 00:23:14.098 "nguid": "343814D1991B47BBB00628C4E5908578", 00:23:14.098 "uuid": "343814d1-991b-47bb-b006-28c4e5908578" 00:23:14.098 } 00:23:14.098 ] 00:23:14.098 } 00:23:14.098 ] 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3333530 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.098 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.359 rmmod 
nvme_tcp 00:23:14.359 rmmod nvme_fabrics 00:23:14.359 rmmod nvme_keyring 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 3333301 ']' 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 3333301 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3333301 ']' 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3333301 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3333301 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3333301' 00:23:14.359 killing process with pid 3333301 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3333301 00:23:14.359 00:30:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3333301 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.619 00:30:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.610 00:30:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:16.610 00:23:16.610 real 0m11.670s 00:23:16.610 user 0m8.416s 00:23:16.610 sys 0m6.251s 00:23:16.610 00:30:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.610 00:30:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.610 ************************************ 00:23:16.610 END TEST nvmf_aer 00:23:16.610 ************************************ 00:23:16.610 00:30:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:16.610 00:30:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:16.610 00:30:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.610 00:30:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.872 ************************************ 00:23:16.872 START TEST nvmf_async_init 00:23:16.872 ************************************ 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:16.872 * Looking for test storage... 00:23:16.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.872 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:16.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.873 --rc genhtml_branch_coverage=1 00:23:16.873 --rc genhtml_function_coverage=1 00:23:16.873 --rc genhtml_legend=1 00:23:16.873 --rc geninfo_all_blocks=1 00:23:16.873 --rc geninfo_unexecuted_blocks=1 00:23:16.873 00:23:16.873 ' 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:16.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.873 --rc genhtml_branch_coverage=1 00:23:16.873 --rc genhtml_function_coverage=1 00:23:16.873 --rc genhtml_legend=1 00:23:16.873 --rc geninfo_all_blocks=1 00:23:16.873 --rc geninfo_unexecuted_blocks=1 00:23:16.873 00:23:16.873 ' 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:16.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.873 --rc genhtml_branch_coverage=1 00:23:16.873 --rc genhtml_function_coverage=1 00:23:16.873 --rc genhtml_legend=1 00:23:16.873 --rc geninfo_all_blocks=1 00:23:16.873 --rc geninfo_unexecuted_blocks=1 00:23:16.873 00:23:16.873 ' 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:16.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.873 --rc genhtml_branch_coverage=1 00:23:16.873 --rc genhtml_function_coverage=1 00:23:16.873 --rc genhtml_legend=1 00:23:16.873 --rc geninfo_all_blocks=1 00:23:16.873 --rc geninfo_unexecuted_blocks=1 00:23:16.873 00:23:16.873 ' 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.873 00:30:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.873 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:16.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:16.874 00:30:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e55974704bb1432cadee0967be5e75ab 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.874 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.135 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:17.135 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:17.135 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.135 00:30:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:25.305 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:25.305 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:25.305 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:25.305 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.305 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.306 00:30:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:23:25.306 00:23:25.306 --- 10.0.0.2 ping statistics --- 00:23:25.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.306 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:23:25.306 00:30:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:23:25.306 00:23:25.306 --- 10.0.0.1 ping statistics --- 00:23:25.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.306 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=3337820 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 3337820 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3337820 ']' 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.306 [2024-10-09 00:30:55.112027] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:23:25.306 [2024-10-09 00:30:55.112090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.306 [2024-10-09 00:30:55.199648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.306 [2024-10-09 00:30:55.294522] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.306 [2024-10-09 00:30:55.294579] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.306 [2024-10-09 00:30:55.294588] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.306 [2024-10-09 00:30:55.294595] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.306 [2024-10-09 00:30:55.294602] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.306 [2024-10-09 00:30:55.295381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:25.306 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.566 [2024-10-09 00:30:55.975375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.566 null0 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.566 00:30:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e55974704bb1432cadee0967be5e75ab 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.566 [2024-10-09 00:30:56.035744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.566 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.826 nvme0n1 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.826 [ 00:23:25.826 { 00:23:25.826 "name": "nvme0n1", 00:23:25.826 "aliases": [ 00:23:25.826 "e5597470-4bb1-432c-adee-0967be5e75ab" 00:23:25.826 ], 00:23:25.826 "product_name": "NVMe disk", 00:23:25.826 "block_size": 512, 00:23:25.826 "num_blocks": 2097152, 00:23:25.826 "uuid": "e5597470-4bb1-432c-adee-0967be5e75ab", 00:23:25.826 "numa_id": 0, 00:23:25.826 "assigned_rate_limits": { 00:23:25.826 "rw_ios_per_sec": 0, 00:23:25.826 "rw_mbytes_per_sec": 0, 00:23:25.826 "r_mbytes_per_sec": 0, 00:23:25.826 "w_mbytes_per_sec": 0 00:23:25.826 }, 00:23:25.826 "claimed": false, 00:23:25.826 "zoned": false, 00:23:25.826 "supported_io_types": { 00:23:25.826 "read": true, 00:23:25.826 "write": true, 00:23:25.826 "unmap": false, 00:23:25.826 "flush": true, 00:23:25.826 "reset": true, 00:23:25.826 "nvme_admin": true, 00:23:25.826 "nvme_io": true, 00:23:25.826 "nvme_io_md": false, 00:23:25.826 "write_zeroes": true, 00:23:25.826 "zcopy": false, 00:23:25.826 "get_zone_info": false, 00:23:25.826 "zone_management": false, 00:23:25.826 "zone_append": false, 00:23:25.826 "compare": true, 00:23:25.826 "compare_and_write": true, 00:23:25.826 "abort": true, 00:23:25.826 "seek_hole": false, 00:23:25.826 "seek_data": false, 00:23:25.826 "copy": true, 00:23:25.826 "nvme_iov_md": false 00:23:25.826 }, 00:23:25.826 
"memory_domains": [ 00:23:25.826 { 00:23:25.826 "dma_device_id": "system", 00:23:25.826 "dma_device_type": 1 00:23:25.826 } 00:23:25.826 ], 00:23:25.826 "driver_specific": { 00:23:25.826 "nvme": [ 00:23:25.826 { 00:23:25.826 "trid": { 00:23:25.826 "trtype": "TCP", 00:23:25.826 "adrfam": "IPv4", 00:23:25.826 "traddr": "10.0.0.2", 00:23:25.826 "trsvcid": "4420", 00:23:25.826 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:25.826 }, 00:23:25.826 "ctrlr_data": { 00:23:25.826 "cntlid": 1, 00:23:25.826 "vendor_id": "0x8086", 00:23:25.826 "model_number": "SPDK bdev Controller", 00:23:25.826 "serial_number": "00000000000000000000", 00:23:25.826 "firmware_revision": "25.01", 00:23:25.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:25.826 "oacs": { 00:23:25.826 "security": 0, 00:23:25.826 "format": 0, 00:23:25.826 "firmware": 0, 00:23:25.826 "ns_manage": 0 00:23:25.826 }, 00:23:25.826 "multi_ctrlr": true, 00:23:25.826 "ana_reporting": false 00:23:25.826 }, 00:23:25.826 "vs": { 00:23:25.826 "nvme_version": "1.3" 00:23:25.826 }, 00:23:25.826 "ns_data": { 00:23:25.826 "id": 1, 00:23:25.826 "can_share": true 00:23:25.826 } 00:23:25.826 } 00:23:25.826 ], 00:23:25.826 "mp_policy": "active_passive" 00:23:25.826 } 00:23:25.826 } 00:23:25.826 ] 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.826 [2024-10-09 00:30:56.312242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.826 [2024-10-09 00:30:56.312324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143a700 (9): Bad file descriptor 00:23:25.826 [2024-10-09 00:30:56.443878] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.826 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.826 [ 00:23:25.826 { 00:23:25.826 "name": "nvme0n1", 00:23:25.826 "aliases": [ 00:23:25.826 "e5597470-4bb1-432c-adee-0967be5e75ab" 00:23:25.826 ], 00:23:25.826 "product_name": "NVMe disk", 00:23:25.826 "block_size": 512, 00:23:25.826 "num_blocks": 2097152, 00:23:25.826 "uuid": "e5597470-4bb1-432c-adee-0967be5e75ab", 00:23:25.826 "numa_id": 0, 00:23:25.826 "assigned_rate_limits": { 00:23:25.826 "rw_ios_per_sec": 0, 00:23:25.826 "rw_mbytes_per_sec": 0, 00:23:25.826 "r_mbytes_per_sec": 0, 00:23:25.826 "w_mbytes_per_sec": 0 00:23:25.826 }, 00:23:25.826 "claimed": false, 00:23:25.826 "zoned": false, 00:23:25.826 "supported_io_types": { 00:23:26.087 "read": true, 00:23:26.087 "write": true, 00:23:26.087 "unmap": false, 00:23:26.087 "flush": true, 00:23:26.087 "reset": true, 00:23:26.087 "nvme_admin": true, 00:23:26.087 "nvme_io": true, 00:23:26.087 "nvme_io_md": false, 00:23:26.087 "write_zeroes": true, 00:23:26.087 "zcopy": false, 00:23:26.087 "get_zone_info": false, 00:23:26.087 "zone_management": false, 00:23:26.087 "zone_append": false, 00:23:26.087 "compare": true, 00:23:26.087 "compare_and_write": true, 00:23:26.087 "abort": true, 00:23:26.087 "seek_hole": false, 00:23:26.087 "seek_data": false, 00:23:26.087 "copy": true, 00:23:26.087 "nvme_iov_md": false 00:23:26.087 }, 00:23:26.087 "memory_domains": [ 00:23:26.087 { 00:23:26.087 "dma_device_id": "system", 00:23:26.087 "dma_device_type": 1 00:23:26.087 } 00:23:26.087 ], 00:23:26.087 "driver_specific": { 00:23:26.087 "nvme": [ 00:23:26.087 { 00:23:26.087 "trid": { 00:23:26.087 "trtype": "TCP", 00:23:26.087 "adrfam": "IPv4", 00:23:26.087 "traddr": "10.0.0.2", 00:23:26.087 "trsvcid": "4420", 00:23:26.087 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:26.087 }, 00:23:26.087 "ctrlr_data": { 00:23:26.087 "cntlid": 2, 00:23:26.087 "vendor_id": "0x8086", 00:23:26.087 "model_number": "SPDK bdev Controller", 00:23:26.087 "serial_number": "00000000000000000000", 00:23:26.087 "firmware_revision": "25.01", 00:23:26.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.087 "oacs": { 00:23:26.087 "security": 0, 00:23:26.087 "format": 0, 00:23:26.087 "firmware": 0, 00:23:26.087 "ns_manage": 0 00:23:26.087 }, 00:23:26.087 "multi_ctrlr": true, 00:23:26.087 "ana_reporting": false 00:23:26.087 }, 00:23:26.087 "vs": { 00:23:26.087 "nvme_version": "1.3" 00:23:26.087 }, 00:23:26.087 "ns_data": { 00:23:26.087 "id": 1, 00:23:26.087 "can_share": true 00:23:26.087 } 00:23:26.087 } 00:23:26.087 ], 00:23:26.087 "mp_policy": "active_passive" 00:23:26.087 } 00:23:26.087 } 00:23:26.087 ] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
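[editor's note] The stretch of log above is the core of the nvmf_async_init flow: the target-side rpc_cmd calls create a TCP transport, a 1024-block null bdev, subsystem nqn.2016-06.io.spdk:cnode0 with that bdev as a namespace, and a listener on 10.0.0.2:4420; the host side then attaches with bdev_nvme_attach_controller, dumps the resulting nvme0n1 bdev, resets the controller, dumps it again (cntlid moves from 1 to 2 across the reconnect), and detaches. A minimal sketch of the same sequence driven directly through scripts/rpc.py is below; it assumes a running nvmf_tgt on the default RPC socket, treats rpc_cmd as a thin wrapper around rpc.py (an assumption based on the sourced common scripts), and reuses the addresses, names and NGUID from this run purely as examples.
  # minimal sketch, assuming nvmf_tgt is already running; names/addresses copied from this run as examples
  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                       # same transport options the test passes
  $rpc bdev_null_create null0 1024 512                       # 1024 blocks x 512 B backing namespace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # allow any host for the plain-TCP phase
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e55974704bb1432cadee0967be5e75ab
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # host side: attach, inspect, reset, detach
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  $rpc bdev_get_bdevs -b nvme0n1
  $rpc bdev_nvme_reset_controller nvme0
  $rpc bdev_nvme_detach_controller nvme0
The two bdev_get_bdevs dumps in the log are identical except for ctrlr_data.cntlid, which is how the test confirms the reset actually tore down and re-established the admin connection.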
00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.eA1Mu4HhQI 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.eA1Mu4HhQI 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.eA1Mu4HhQI 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.087 [2024-10-09 00:30:56.532958] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.087 [2024-10-09 00:30:56.533135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.087 [2024-10-09 00:30:56.557040] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.087 nvme0n1 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.087 [ 00:23:26.087 { 00:23:26.087 "name": "nvme0n1", 00:23:26.087 "aliases": [ 00:23:26.087 "e5597470-4bb1-432c-adee-0967be5e75ab" 00:23:26.087 ], 00:23:26.087 "product_name": "NVMe disk", 00:23:26.087 "block_size": 512, 00:23:26.087 "num_blocks": 2097152, 00:23:26.087 "uuid": "e5597470-4bb1-432c-adee-0967be5e75ab", 00:23:26.087 "numa_id": 0, 00:23:26.087 "assigned_rate_limits": { 00:23:26.087 "rw_ios_per_sec": 0, 00:23:26.087 "rw_mbytes_per_sec": 0, 00:23:26.087 "r_mbytes_per_sec": 0, 00:23:26.087 "w_mbytes_per_sec": 0 00:23:26.087 }, 00:23:26.087 "claimed": false, 00:23:26.087 "zoned": false, 00:23:26.087 "supported_io_types": { 00:23:26.087 "read": true, 00:23:26.087 "write": true, 00:23:26.087 "unmap": false, 00:23:26.087 "flush": true, 00:23:26.087 "reset": true, 00:23:26.087 "nvme_admin": true, 00:23:26.087 "nvme_io": true, 00:23:26.087 "nvme_io_md": false, 00:23:26.087 "write_zeroes": true, 00:23:26.087 "zcopy": false, 00:23:26.087 "get_zone_info": false, 00:23:26.087 "zone_management": false, 00:23:26.087 "zone_append": false, 00:23:26.087 "compare": true, 00:23:26.087 "compare_and_write": true, 00:23:26.087 "abort": true, 00:23:26.087 "seek_hole": false, 00:23:26.087 "seek_data": false, 00:23:26.087 "copy": true, 00:23:26.087 "nvme_iov_md": false 00:23:26.087 }, 00:23:26.087 "memory_domains": [ 00:23:26.087 { 00:23:26.087 "dma_device_id": "system", 00:23:26.087 "dma_device_type": 1 00:23:26.087 } 00:23:26.087 ], 00:23:26.087 "driver_specific": { 00:23:26.087 "nvme": [ 00:23:26.087 { 00:23:26.087 "trid": { 00:23:26.087 "trtype": "TCP", 00:23:26.087 "adrfam": "IPv4", 00:23:26.087 "traddr": "10.0.0.2", 00:23:26.087 "trsvcid": "4421", 00:23:26.087 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:26.087 }, 00:23:26.087 "ctrlr_data": { 00:23:26.087 "cntlid": 3, 00:23:26.087 "vendor_id": "0x8086", 00:23:26.087 "model_number": "SPDK bdev Controller", 00:23:26.087 "serial_number": "00000000000000000000", 00:23:26.087 "firmware_revision": "25.01", 00:23:26.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.087 "oacs": { 00:23:26.087 "security": 0, 00:23:26.087 "format": 0, 00:23:26.087 "firmware": 0, 00:23:26.087 "ns_manage": 0 00:23:26.087 }, 00:23:26.087 "multi_ctrlr": true, 00:23:26.087 "ana_reporting": false 00:23:26.087 }, 00:23:26.087 "vs": { 00:23:26.087 "nvme_version": "1.3" 00:23:26.087 }, 00:23:26.087 "ns_data": { 00:23:26.087 "id": 1, 00:23:26.087 "can_share": true 00:23:26.087 } 00:23:26.087 } 00:23:26.087 ], 00:23:26.087 "mp_policy": "active_passive" 00:23:26.087 } 00:23:26.087 } 00:23:26.087 ] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.eA1Mu4HhQI 00:23:26.087 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
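[editor's note] The section above repeats the attach with TLS: the test writes an NVMe TLS PSK to a mktemp file, registers it as key0 with keyring_file_add_key, disables allow-any-host on the subsystem, adds a second listener on port 4421 with --secure-channel, grants nqn.2016-06.io.spdk:host1 access with --psk key0, and reattaches over the secure channel (the following bdev dump shows trsvcid 4421 and cntlid 3). A condensed sketch of the same steps follows; /tmp/psk.key stands in for the mktemp path in the log, and the PSK value is the one the test itself echoes.
  # minimal sketch, assuming the subsystem from the previous step already exists; /tmp/psk.key is a stand-in path
  rpc=scripts/rpc.py
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/psk.key
  chmod 0600 /tmp/psk.key
  $rpc keyring_file_add_key key0 /tmp/psk.key
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
Both nvmf_tcp_listen and rpc_bdev_nvme_attach_controller print "TLS support is considered experimental" in this SPDK build, so those notices in the log are expected rather than failures.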
00:23:26.088 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:26.088 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:26.088 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:26.088 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.088 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:26.088 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.088 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.088 rmmod nvme_tcp 00:23:26.088 rmmod nvme_fabrics 00:23:26.088 rmmod nvme_keyring 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 3337820 ']' 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 3337820 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3337820 ']' 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3337820 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3337820 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3337820' 00:23:26.349 killing process with pid 3337820 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3337820 00:23:26.349 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3337820 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.610 00:30:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.521 00:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:28.521 00:23:28.521 real 0m11.819s 00:23:28.521 user 0m4.255s 00:23:28.521 sys 0m6.137s 00:23:28.521 00:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.521 00:30:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.521 ************************************ 00:23:28.521 END TEST nvmf_async_init 00:23:28.521 ************************************ 00:23:28.521 00:30:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:28.521 00:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:28.521 00:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:28.521 00:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.782 ************************************ 00:23:28.782 START TEST dma 00:23:28.782 ************************************ 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:28.782 * Looking for test storage... 00:23:28.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:28.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.782 --rc genhtml_branch_coverage=1 00:23:28.782 --rc genhtml_function_coverage=1 00:23:28.782 --rc genhtml_legend=1 00:23:28.782 --rc geninfo_all_blocks=1 00:23:28.782 --rc geninfo_unexecuted_blocks=1 00:23:28.782 00:23:28.782 ' 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:28.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.782 --rc genhtml_branch_coverage=1 00:23:28.782 --rc genhtml_function_coverage=1 00:23:28.782 --rc genhtml_legend=1 00:23:28.782 --rc geninfo_all_blocks=1 00:23:28.782 --rc geninfo_unexecuted_blocks=1 00:23:28.782 00:23:28.782 ' 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:28.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.782 --rc genhtml_branch_coverage=1 00:23:28.782 --rc genhtml_function_coverage=1 00:23:28.782 --rc genhtml_legend=1 00:23:28.782 --rc geninfo_all_blocks=1 00:23:28.782 --rc geninfo_unexecuted_blocks=1 00:23:28.782 00:23:28.782 ' 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:28.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.782 --rc genhtml_branch_coverage=1 00:23:28.782 --rc genhtml_function_coverage=1 00:23:28.782 --rc genhtml_legend=1 00:23:28.782 --rc geninfo_all_blocks=1 00:23:28.782 --rc geninfo_unexecuted_blocks=1 00:23:28.782 00:23:28.782 ' 00:23:28.782 00:30:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.783 
00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:28.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:28.783 00:23:28.783 real 0m0.235s 00:23:28.783 user 0m0.128s 00:23:28.783 sys 0m0.122s 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.783 00:30:59 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:28.783 ************************************ 00:23:28.783 END TEST dma 00:23:28.783 ************************************ 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.044 ************************************ 00:23:29.044 START TEST nvmf_identify 00:23:29.044 
************************************ 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:29.044 * Looking for test storage... 00:23:29.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:29.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.044 --rc genhtml_branch_coverage=1 00:23:29.044 --rc genhtml_function_coverage=1 00:23:29.044 --rc genhtml_legend=1 00:23:29.044 --rc geninfo_all_blocks=1 00:23:29.044 --rc geninfo_unexecuted_blocks=1 00:23:29.044 00:23:29.044 ' 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:29.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.044 --rc genhtml_branch_coverage=1 00:23:29.044 --rc genhtml_function_coverage=1 00:23:29.044 --rc genhtml_legend=1 00:23:29.044 --rc geninfo_all_blocks=1 00:23:29.044 --rc geninfo_unexecuted_blocks=1 00:23:29.044 00:23:29.044 ' 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:29.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.044 --rc genhtml_branch_coverage=1 00:23:29.044 --rc genhtml_function_coverage=1 00:23:29.044 --rc genhtml_legend=1 00:23:29.044 --rc geninfo_all_blocks=1 00:23:29.044 --rc geninfo_unexecuted_blocks=1 00:23:29.044 00:23:29.044 ' 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:29.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.044 --rc genhtml_branch_coverage=1 00:23:29.044 --rc genhtml_function_coverage=1 00:23:29.044 --rc genhtml_legend=1 00:23:29.044 --rc geninfo_all_blocks=1 00:23:29.044 --rc geninfo_unexecuted_blocks=1 00:23:29.044 00:23:29.044 ' 00:23:29.044 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.311 00:30:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.462 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:37.463 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:37.463 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:37.463 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:37.463 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.463 00:31:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:23:37.463 00:23:37.463 --- 10.0.0.2 ping statistics --- 00:23:37.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.463 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:37.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:37.463 00:23:37.463 --- 10.0.0.1 ping statistics --- 00:23:37.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.463 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3342398 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3342398 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3342398 ']' 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.463 00:31:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.463 [2024-10-09 00:31:07.294370] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
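[editor's note] Before the identify test launches its target, the nvmftestinit plumbing shown above splits the two E810 ports across a network namespace so initiator and target traffic cross a real link: cvl_0_0 is moved into cvl_0_0_ns_spdk with 10.0.0.2/24 (target side), cvl_0_1 keeps 10.0.0.1/24 in the default namespace (initiator side), an iptables rule opens TCP/4420 on the initiator port, both directions are ping-checked, and nvmf_tgt is then started inside the namespace. A condensed sketch of that setup is below; the interface names are the ones this rig reports and will differ on other hardware.
  # minimal sketch of the namespace topology used by this run; interface names are rig-specific examples
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2                                               # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator reachability
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs inside the namespace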
00:23:37.463 [2024-10-09 00:31:07.294434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.463 [2024-10-09 00:31:07.382601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.463 [2024-10-09 00:31:07.479739] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.463 [2024-10-09 00:31:07.479801] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.463 [2024-10-09 00:31:07.479809] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.463 [2024-10-09 00:31:07.479816] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.464 [2024-10-09 00:31:07.479823] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.464 [2024-10-09 00:31:07.481883] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.464 [2024-10-09 00:31:07.482045] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.464 [2024-10-09 00:31:07.482206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.464 [2024-10-09 00:31:07.482206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.725 [2024-10-09 00:31:08.121869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.725 Malloc0 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:37.725 [2024-10-09 00:31:08.231614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:37.725 [
00:23:37.725 {
00:23:37.725 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:37.725 "subtype": "Discovery",
00:23:37.725 "listen_addresses": [
00:23:37.725 {
00:23:37.725 "trtype": "TCP",
00:23:37.725 "adrfam": "IPv4",
00:23:37.725 "traddr": "10.0.0.2",
00:23:37.725 "trsvcid": "4420"
00:23:37.725 }
00:23:37.725 ],
00:23:37.725 "allow_any_host": true,
00:23:37.725 "hosts": []
00:23:37.725 },
00:23:37.725 {
00:23:37.725 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:37.725 "subtype": "NVMe",
00:23:37.725 "listen_addresses": [
00:23:37.725 {
00:23:37.725 "trtype": "TCP",
00:23:37.725 "adrfam": "IPv4",
00:23:37.725 "traddr": "10.0.0.2",
00:23:37.725 "trsvcid": "4420"
00:23:37.725 }
00:23:37.725 ],
00:23:37.725 "allow_any_host": true,
00:23:37.725 "hosts": [],
00:23:37.725 "serial_number": "SPDK00000000000001",
00:23:37.725 "model_number": "SPDK bdev Controller",
00:23:37.725 "max_namespaces": 32,
00:23:37.725 "min_cntlid": 1,
00:23:37.725 "max_cntlid": 65519,
00:23:37.725 "namespaces": [
00:23:37.725 {
00:23:37.725 "nsid": 1,
00:23:37.725 "bdev_name": "Malloc0",
00:23:37.725 "name": "Malloc0",
00:23:37.725 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:23:37.725 "eui64": "ABCDEF0123456789",
00:23:37.725 "uuid": "bc21a9f3-023b-4758-9f00-db22a9d9db91"
00:23:37.725 }
00:23:37.725 ]
00:23:37.725 }
00:23:37.725 ]
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.725 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:37.725 [2024-10-09 00:31:08.295601] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:23:37.726 [2024-10-09 00:31:08.295651] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342750 ] 00:23:37.726 [2024-10-09 00:31:08.333992] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:37.726 [2024-10-09 00:31:08.334056] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:37.726 [2024-10-09 00:31:08.334061] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:37.726 [2024-10-09 00:31:08.334077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:37.726 [2024-10-09 00:31:08.334089] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:37.726 [2024-10-09 00:31:08.334982] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:37.726 [2024-10-09 00:31:08.335029] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1438760 0 00:23:37.726 [2024-10-09 00:31:08.348743] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:37.726 [2024-10-09 00:31:08.348760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:37.726 [2024-10-09 00:31:08.348766] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:37.726 [2024-10-09 00:31:08.348770] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:37.726 [2024-10-09 00:31:08.348806] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.348813] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.348817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.726 [2024-10-09 00:31:08.348836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:37.726 [2024-10-09 00:31:08.348860] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.726 [2024-10-09 00:31:08.356736] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.726 [2024-10-09 00:31:08.356746] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.726 [2024-10-09 00:31:08.356750] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.356755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760 00:23:37.726 [2024-10-09 00:31:08.356770] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:37.726 [2024-10-09 00:31:08.356779] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:37.726 [2024-10-09 00:31:08.356785] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:37.726 [2024-10-09 00:31:08.356802] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.356806] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.356810] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.726 [2024-10-09 00:31:08.356818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-10-09 00:31:08.356835] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.726 [2024-10-09 00:31:08.357036] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.726 [2024-10-09 00:31:08.357043] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.726 [2024-10-09 00:31:08.357046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760 00:23:37.726 [2024-10-09 00:31:08.357056] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:37.726 [2024-10-09 00:31:08.357064] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:37.726 [2024-10-09 00:31:08.357071] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357075] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357079] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.726 [2024-10-09 00:31:08.357086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-10-09 00:31:08.357097] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.726 [2024-10-09 00:31:08.357282] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.726 [2024-10-09 00:31:08.357288] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.726 [2024-10-09 00:31:08.357291] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357295] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760 00:23:37.726 [2024-10-09 00:31:08.357305] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:37.726 [2024-10-09 00:31:08.357314] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:37.726 [2024-10-09 00:31:08.357321] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357325] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357328] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.726 [2024-10-09 00:31:08.357335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-10-09 00:31:08.357346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.726 
[2024-10-09 00:31:08.357562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.726 [2024-10-09 00:31:08.357568] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.726 [2024-10-09 00:31:08.357572] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357576] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760 00:23:37.726 [2024-10-09 00:31:08.357581] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:37.726 [2024-10-09 00:31:08.357591] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357595] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357599] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.726 [2024-10-09 00:31:08.357606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-10-09 00:31:08.357617] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.726 [2024-10-09 00:31:08.357794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.726 [2024-10-09 00:31:08.357801] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.726 [2024-10-09 00:31:08.357804] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357808] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760 00:23:37.726 [2024-10-09 00:31:08.357814] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:37.726 [2024-10-09 00:31:08.357819] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:37.726 [2024-10-09 00:31:08.357827] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:37.726 [2024-10-09 00:31:08.357933] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:37.726 [2024-10-09 00:31:08.357938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:37.726 [2024-10-09 00:31:08.357949] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357952] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.357956] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.726 [2024-10-09 00:31:08.357963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-10-09 00:31:08.357974] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.726 [2024-10-09 00:31:08.358161] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.726 [2024-10-09 00:31:08.358170] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:23:37.726 [2024-10-09 00:31:08.358174] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.358178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760 00:23:37.726 [2024-10-09 00:31:08.358183] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:37.726 [2024-10-09 00:31:08.358192] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.358196] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.358200] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.726 [2024-10-09 00:31:08.358207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-10-09 00:31:08.358217] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.726 [2024-10-09 00:31:08.358422] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.726 [2024-10-09 00:31:08.358429] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.726 [2024-10-09 00:31:08.358432] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.358436] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760 00:23:37.726 [2024-10-09 00:31:08.358441] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:37.726 [2024-10-09 00:31:08.358446] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:37.726 [2024-10-09 00:31:08.358453] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:37.726 [2024-10-09 00:31:08.358462] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:37.726 [2024-10-09 00:31:08.358472] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.358476] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.726 [2024-10-09 00:31:08.358483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.726 [2024-10-09 00:31:08.358494] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.726 [2024-10-09 00:31:08.358695] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.726 [2024-10-09 00:31:08.358701] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.726 [2024-10-09 00:31:08.358705] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.726 [2024-10-09 00:31:08.358710] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1438760): datao=0, datal=4096, cccid=0 00:23:37.726 [2024-10-09 00:31:08.358715] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1498480) on tqpair(0x1438760): expected_datao=0, 
payload_size=4096 00:23:37.726 [2024-10-09 00:31:08.358729] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.727 [2024-10-09 00:31:08.358744] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.990 [2024-10-09 00:31:08.358751] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.990 [2024-10-09 00:31:08.403741] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.990 [2024-10-09 00:31:08.403771] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.990 [2024-10-09 00:31:08.403776] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.990 [2024-10-09 00:31:08.403782] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760 00:23:37.990 [2024-10-09 00:31:08.403795] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:37.990 [2024-10-09 00:31:08.403807] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:37.990 [2024-10-09 00:31:08.403812] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:37.990 [2024-10-09 00:31:08.403818] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:37.990 [2024-10-09 00:31:08.403823] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:37.990 [2024-10-09 00:31:08.403829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:37.990 [2024-10-09 00:31:08.403840] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:37.990 [2024-10-09 00:31:08.403849] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.990 [2024-10-09 00:31:08.403854] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.990 [2024-10-09 00:31:08.403859] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.990 [2024-10-09 00:31:08.403871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:37.991 [2024-10-09 00:31:08.403892] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.991 [2024-10-09 00:31:08.404081] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.991 [2024-10-09 00:31:08.404088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.991 [2024-10-09 00:31:08.404092] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404096] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760 00:23:37.991 [2024-10-09 00:31:08.404107] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404111] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404115] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1438760) 00:23:37.991 [2024-10-09 00:31:08.404122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.991 [2024-10-09 00:31:08.404129] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404133] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404137] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1438760) 00:23:37.991 [2024-10-09 00:31:08.404143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.991 [2024-10-09 00:31:08.404150] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404154] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404158] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1438760) 00:23:37.991 [2024-10-09 00:31:08.404163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.991 [2024-10-09 00:31:08.404170] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404174] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404179] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1438760) 00:23:37.991 [2024-10-09 00:31:08.404184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.991 [2024-10-09 00:31:08.404190] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:37.991 [2024-10-09 00:31:08.404204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:37.991 [2024-10-09 00:31:08.404214] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404218] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1438760) 00:23:37.991 [2024-10-09 00:31:08.404225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.991 [2024-10-09 00:31:08.404240] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498480, cid 0, qid 0 00:23:37.991 [2024-10-09 00:31:08.404245] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498600, cid 1, qid 0 00:23:37.991 [2024-10-09 00:31:08.404250] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498780, cid 2, qid 0 00:23:37.991 [2024-10-09 00:31:08.404256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498900, cid 3, qid 0 00:23:37.991 [2024-10-09 00:31:08.404261] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498a80, cid 4, qid 0 00:23:37.991 [2024-10-09 00:31:08.404487] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.991 [2024-10-09 00:31:08.404494] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.991 [2024-10-09 00:31:08.404498] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404502] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1498a80) on tqpair=0x1438760 00:23:37.991 [2024-10-09 00:31:08.404507] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:37.991 [2024-10-09 00:31:08.404513] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:37.991 [2024-10-09 00:31:08.404527] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404531] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1438760) 00:23:37.991 [2024-10-09 00:31:08.404538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.991 [2024-10-09 00:31:08.404550] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498a80, cid 4, qid 0 00:23:37.991 [2024-10-09 00:31:08.404763] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.991 [2024-10-09 00:31:08.404772] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.991 [2024-10-09 00:31:08.404776] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404780] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1438760): datao=0, datal=4096, cccid=4 00:23:37.991 [2024-10-09 00:31:08.404786] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1498a80) on tqpair(0x1438760): expected_datao=0, payload_size=4096 00:23:37.991 [2024-10-09 00:31:08.404791] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404810] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.404815] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.991 [2024-10-09 00:31:08.405012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.991 [2024-10-09 00:31:08.405016] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405020] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498a80) on tqpair=0x1438760 00:23:37.991 [2024-10-09 00:31:08.405035] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:37.991 [2024-10-09 00:31:08.405072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405077] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1438760) 00:23:37.991 [2024-10-09 00:31:08.405084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.991 [2024-10-09 00:31:08.405095] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405099] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405103] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1438760) 00:23:37.991 [2024-10-09 00:31:08.405110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.991 [2024-10-09 
00:31:08.405123] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498a80, cid 4, qid 0 00:23:37.991 [2024-10-09 00:31:08.405129] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498c00, cid 5, qid 0 00:23:37.991 [2024-10-09 00:31:08.405385] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.991 [2024-10-09 00:31:08.405391] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.991 [2024-10-09 00:31:08.405395] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405399] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1438760): datao=0, datal=1024, cccid=4 00:23:37.991 [2024-10-09 00:31:08.405404] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1498a80) on tqpair(0x1438760): expected_datao=0, payload_size=1024 00:23:37.991 [2024-10-09 00:31:08.405409] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405416] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405420] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405426] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.991 [2024-10-09 00:31:08.405432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.991 [2024-10-09 00:31:08.405436] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.405440] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498c00) on tqpair=0x1438760 00:23:37.991 [2024-10-09 00:31:08.445885] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.991 [2024-10-09 00:31:08.445898] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.991 [2024-10-09 00:31:08.445902] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.445906] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498a80) on tqpair=0x1438760 00:23:37.991 [2024-10-09 00:31:08.445926] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.445931] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1438760) 00:23:37.991 [2024-10-09 00:31:08.445939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.991 [2024-10-09 00:31:08.445957] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498a80, cid 4, qid 0 00:23:37.991 [2024-10-09 00:31:08.446164] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.991 [2024-10-09 00:31:08.446170] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.991 [2024-10-09 00:31:08.446174] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.446178] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1438760): datao=0, datal=3072, cccid=4 00:23:37.991 [2024-10-09 00:31:08.446183] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1498a80) on tqpair(0x1438760): expected_datao=0, payload_size=3072 00:23:37.991 [2024-10-09 00:31:08.446187] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.991 [2024-10-09 00:31:08.446195] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:37.991 [2024-10-09 00:31:08.446199] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:37.991 [2024-10-09 00:31:08.446355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:37.991 [2024-10-09 00:31:08.446361] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:37.991 [2024-10-09 00:31:08.446368] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:37.991 [2024-10-09 00:31:08.446372] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498a80) on tqpair=0x1438760
00:23:37.991 [2024-10-09 00:31:08.446381] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:37.991 [2024-10-09 00:31:08.446385] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1438760)
00:23:37.991 [2024-10-09 00:31:08.446392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.991 [2024-10-09 00:31:08.446406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498a80, cid 4, qid 0
00:23:37.991 [2024-10-09 00:31:08.446610] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:37.991 [2024-10-09 00:31:08.446616] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:37.991 [2024-10-09 00:31:08.446620] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:37.991 [2024-10-09 00:31:08.446624] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1438760): datao=0, datal=8, cccid=4
00:23:37.991 [2024-10-09 00:31:08.446628] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1498a80) on tqpair(0x1438760): expected_datao=0, payload_size=8
00:23:37.991 [2024-10-09 00:31:08.446632] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:37.991 [2024-10-09 00:31:08.446639] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:37.992 [2024-10-09 00:31:08.446643] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:37.992 [2024-10-09 00:31:08.486905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:37.992 [2024-10-09 00:31:08.486915] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:37.992 [2024-10-09 00:31:08.486919] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:37.992 [2024-10-09 00:31:08.486923] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498a80) on tqpair=0x1438760
00:23:37.992 =====================================================
00:23:37.992 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:37.992 =====================================================
00:23:37.992 Controller Capabilities/Features
00:23:37.992 ================================
00:23:37.992 Vendor ID: 0000
00:23:37.992 Subsystem Vendor ID: 0000
00:23:37.992 Serial Number: ....................
00:23:37.992 Model Number: ........................................
00:23:37.992 Firmware Version: 25.01
00:23:37.992 Recommended Arb Burst: 0
00:23:37.992 IEEE OUI Identifier: 00 00 00
00:23:37.992 Multi-path I/O
00:23:37.992 May have multiple subsystem ports: No
00:23:37.992 May have multiple controllers: No
00:23:37.992 Associated with SR-IOV VF: No
00:23:37.992 Max Data Transfer Size: 131072
00:23:37.992 Max Number of Namespaces: 0
00:23:37.992 Max Number of I/O Queues: 1024
00:23:37.992 NVMe Specification Version (VS): 1.3
00:23:37.992 NVMe Specification Version (Identify): 1.3
00:23:37.992 Maximum Queue Entries: 128
00:23:37.992 Contiguous Queues Required: Yes
00:23:37.992 Arbitration Mechanisms Supported
00:23:37.992 Weighted Round Robin: Not Supported
00:23:37.992 Vendor Specific: Not Supported
00:23:37.992 Reset Timeout: 15000 ms
00:23:37.992 Doorbell Stride: 4 bytes
00:23:37.992 NVM Subsystem Reset: Not Supported
00:23:37.992 Command Sets Supported
00:23:37.992 NVM Command Set: Supported
00:23:37.992 Boot Partition: Not Supported
00:23:37.992 Memory Page Size Minimum: 4096 bytes
00:23:37.992 Memory Page Size Maximum: 4096 bytes
00:23:37.992 Persistent Memory Region: Not Supported
00:23:37.992 Optional Asynchronous Events Supported
00:23:37.992 Namespace Attribute Notices: Not Supported
00:23:37.992 Firmware Activation Notices: Not Supported
00:23:37.992 ANA Change Notices: Not Supported
00:23:37.992 PLE Aggregate Log Change Notices: Not Supported
00:23:37.992 LBA Status Info Alert Notices: Not Supported
00:23:37.992 EGE Aggregate Log Change Notices: Not Supported
00:23:37.992 Normal NVM Subsystem Shutdown event: Not Supported
00:23:37.992 Zone Descriptor Change Notices: Not Supported
00:23:37.992 Discovery Log Change Notices: Supported
00:23:37.992 Controller Attributes
00:23:37.992 128-bit Host Identifier: Not Supported
00:23:37.992 Non-Operational Permissive Mode: Not Supported
00:23:37.992 NVM Sets: Not Supported
00:23:37.992 Read Recovery Levels: Not Supported
00:23:37.992 Endurance Groups: Not Supported
00:23:37.992 Predictable Latency Mode: Not Supported
00:23:37.992 Traffic Based Keep ALive: Not Supported
00:23:37.992 Namespace Granularity: Not Supported
00:23:37.992 SQ Associations: Not Supported
00:23:37.992 UUID List: Not Supported
00:23:37.992 Multi-Domain Subsystem: Not Supported
00:23:37.992 Fixed Capacity Management: Not Supported
00:23:37.992 Variable Capacity Management: Not Supported
00:23:37.992 Delete Endurance Group: Not Supported
00:23:37.992 Delete NVM Set: Not Supported
00:23:37.992 Extended LBA Formats Supported: Not Supported
00:23:37.992 Flexible Data Placement Supported: Not Supported
00:23:37.992
00:23:37.992 Controller Memory Buffer Support
00:23:37.992 ================================
00:23:37.992 Supported: No
00:23:37.992
00:23:37.992 Persistent Memory Region Support
00:23:37.992 ================================
00:23:37.992 Supported: No
00:23:37.992
00:23:37.992 Admin Command Set Attributes
00:23:37.992 ============================
00:23:37.992 Security Send/Receive: Not Supported
00:23:37.992 Format NVM: Not Supported
00:23:37.992 Firmware Activate/Download: Not Supported
00:23:37.992 Namespace Management: Not Supported
00:23:37.992 Device Self-Test: Not Supported
00:23:37.992 Directives: Not Supported
00:23:37.992 NVMe-MI: Not Supported
00:23:37.992 Virtualization Management: Not Supported
00:23:37.992 Doorbell Buffer Config: Not Supported
00:23:37.992 Get LBA Status Capability: Not Supported
00:23:37.992 Command & Feature Lockdown Capability: Not Supported
00:23:37.992 Abort Command Limit: 1
00:23:37.992 Async Event Request Limit: 4
00:23:37.992 Number of Firmware Slots: N/A
00:23:37.992 Firmware Slot 1 Read-Only: N/A
00:23:37.992 Firmware Activation Without Reset: N/A
00:23:37.992 Multiple Update Detection Support: N/A
00:23:37.992 Firmware Update Granularity: No Information Provided
00:23:37.992 Per-Namespace SMART Log: No
00:23:37.992 Asymmetric Namespace Access Log Page: Not Supported
00:23:37.992 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:37.992 Command Effects Log Page: Not Supported
00:23:37.992 Get Log Page Extended Data: Supported
00:23:37.992 Telemetry Log Pages: Not Supported
00:23:37.992 Persistent Event Log Pages: Not Supported
00:23:37.992 Supported Log Pages Log Page: May Support
00:23:37.992 Commands Supported & Effects Log Page: Not Supported
00:23:37.992 Feature Identifiers & Effects Log Page:May Support
00:23:37.992 NVMe-MI Commands & Effects Log Page: May Support
00:23:37.992 Data Area 4 for Telemetry Log: Not Supported
00:23:37.992 Error Log Page Entries Supported: 128
00:23:37.992 Keep Alive: Not Supported
00:23:37.992
00:23:37.992 NVM Command Set Attributes
00:23:37.992 ==========================
00:23:37.992 Submission Queue Entry Size
00:23:37.992 Max: 1
00:23:37.992 Min: 1
00:23:37.992 Completion Queue Entry Size
00:23:37.992 Max: 1
00:23:37.992 Min: 1
00:23:37.992 Number of Namespaces: 0
00:23:37.992 Compare Command: Not Supported
00:23:37.992 Write Uncorrectable Command: Not Supported
00:23:37.992 Dataset Management Command: Not Supported
00:23:37.992 Write Zeroes Command: Not Supported
00:23:37.992 Set Features Save Field: Not Supported
00:23:37.992 Reservations: Not Supported
00:23:37.992 Timestamp: Not Supported
00:23:37.992 Copy: Not Supported
00:23:37.992 Volatile Write Cache: Not Present
00:23:37.992 Atomic Write Unit (Normal): 1
00:23:37.992 Atomic Write Unit (PFail): 1
00:23:37.992 Atomic Compare & Write Unit: 1
00:23:37.992 Fused Compare & Write: Supported
00:23:37.992 Scatter-Gather List
00:23:37.992 SGL Command Set: Supported
00:23:37.992 SGL Keyed: Supported
00:23:37.992 SGL Bit Bucket Descriptor: Not Supported
00:23:37.992 SGL Metadata Pointer: Not Supported
00:23:37.992 Oversized SGL: Not Supported
00:23:37.992 SGL Metadata Address: Not Supported
00:23:37.992 SGL Offset: Supported
00:23:37.992 Transport SGL Data Block: Not Supported
00:23:37.992 Replay Protected Memory Block: Not Supported
00:23:37.992
00:23:37.992 Firmware Slot Information
00:23:37.992 =========================
00:23:37.992 Active slot: 0
00:23:37.992
00:23:37.992
00:23:37.992 Error Log
00:23:37.992 =========
00:23:37.992
00:23:37.992 Active Namespaces
00:23:37.992 =================
00:23:37.992 Discovery Log Page
00:23:37.992 ==================
00:23:37.992 Generation Counter: 2
00:23:37.992 Number of Records: 2
00:23:37.992 Record Format: 0
00:23:37.992
00:23:37.992 Discovery Log Entry 0
00:23:37.992 ----------------------
00:23:37.992 Transport Type: 3 (TCP)
00:23:37.992 Address Family: 1 (IPv4)
00:23:37.992 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:37.992 Entry Flags:
00:23:37.992 Duplicate Returned Information: 1
00:23:37.992 Explicit Persistent Connection Support for Discovery: 1
00:23:37.992 Transport Requirements:
00:23:37.992 Secure Channel: Not Required
00:23:37.992 Port ID: 0 (0x0000)
00:23:37.992 Controller ID: 65535 (0xffff)
00:23:37.992 Admin Max SQ Size: 128
00:23:37.992 Transport Service Identifier: 4420
00:23:37.992 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:37.992 Transport Address: 10.0.0.2
00:23:37.992 Discovery Log Entry 1
00:23:37.992 ----------------------
00:23:37.992 Transport Type: 3 (TCP)
00:23:37.992 Address Family: 1 (IPv4)
00:23:37.992 Subsystem Type: 2 (NVM Subsystem)
00:23:37.992 Entry Flags:
00:23:37.992 Duplicate Returned Information: 0
00:23:37.992 Explicit Persistent Connection Support for Discovery: 0
00:23:37.992 Transport Requirements:
00:23:37.992 Secure Channel: Not Required
00:23:37.992 Port ID: 0 (0x0000)
00:23:37.992 Controller ID: 65535 (0xffff)
00:23:37.992 Admin Max SQ Size: 128
00:23:37.992 Transport Service Identifier: 4420
00:23:37.992 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:37.992 Transport Address: 10.0.0.2 [2024-10-09 00:31:08.487015] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:37.992 [2024-10-09 00:31:08.487027] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498480) on tqpair=0x1438760
00:23:37.993 [2024-10-09 00:31:08.487035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.993 [2024-10-09 00:31:08.487041] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498600) on tqpair=0x1438760
00:23:37.993 [2024-10-09 00:31:08.487046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.993 [2024-10-09 00:31:08.487051] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498780) on tqpair=0x1438760
00:23:37.993 [2024-10-09 00:31:08.487056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.993 [2024-10-09 00:31:08.487061] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498900) on tqpair=0x1438760
00:23:37.993 [2024-10-09 00:31:08.487066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:37.993 [2024-10-09 00:31:08.487076] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:37.993 [2024-10-09 00:31:08.487080] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:37.993 [2024-10-09 00:31:08.487084] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1438760)
00:23:37.993 [2024-10-09 00:31:08.487092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:37.993 [2024-10-09 00:31:08.487107] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498900, cid 3, qid 0
00:23:37.993 [2024-10-09 00:31:08.487204] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:37.993 [2024-10-09 00:31:08.487210] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:37.993 [2024-10-09 00:31:08.487216] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:37.993 [2024-10-09 00:31:08.487220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498900) on tqpair=0x1438760
00:23:37.993 [2024-10-09 00:31:08.487228] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:37.993 [2024-10-09 00:31:08.487232] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:37.993 [2024-10-09 00:31:08.487236] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1438760)
00:23:37.993 [2024-10-09
00:31:08.487243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.993 [2024-10-09 00:31:08.487257] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498900, cid 3, qid 0 00:23:37.993 [2024-10-09 00:31:08.487474] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.993 [2024-10-09 00:31:08.487480] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.993 [2024-10-09 00:31:08.487483] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.487487] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498900) on tqpair=0x1438760 00:23:37.993 [2024-10-09 00:31:08.487493] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:37.993 [2024-10-09 00:31:08.487501] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:37.993 [2024-10-09 00:31:08.487511] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.487515] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.487518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1438760) 00:23:37.993 [2024-10-09 00:31:08.487525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.993 [2024-10-09 00:31:08.487536] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498900, cid 3, qid 0 00:23:37.993 [2024-10-09 00:31:08.492732] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.993 [2024-10-09 00:31:08.492743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.993 [2024-10-09 00:31:08.492747] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.492751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498900) on tqpair=0x1438760 00:23:37.993 [2024-10-09 00:31:08.492763] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.492768] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.492771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1438760) 00:23:37.993 [2024-10-09 00:31:08.492778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.993 [2024-10-09 00:31:08.492792] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1498900, cid 3, qid 0 00:23:37.993 [2024-10-09 00:31:08.493009] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.993 [2024-10-09 00:31:08.493015] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.993 [2024-10-09 00:31:08.493019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.493023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1498900) on tqpair=0x1438760 00:23:37.993 [2024-10-09 00:31:08.493031] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:23:37.993 00:23:37.993 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:37.993 [2024-10-09 00:31:08.540328] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:23:37.993 [2024-10-09 00:31:08.540375] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342755 ] 00:23:37.993 [2024-10-09 00:31:08.577763] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:37.993 [2024-10-09 00:31:08.577825] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:37.993 [2024-10-09 00:31:08.577830] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:37.993 [2024-10-09 00:31:08.577845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:37.993 [2024-10-09 00:31:08.577856] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:37.993 [2024-10-09 00:31:08.578524] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:37.993 [2024-10-09 00:31:08.578567] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xea8760 0 00:23:37.993 [2024-10-09 00:31:08.592737] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:37.993 [2024-10-09 00:31:08.592752] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:37.993 [2024-10-09 00:31:08.592757] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:37.993 [2024-10-09 00:31:08.592761] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:37.993 [2024-10-09 00:31:08.592792] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.592798] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.592803] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.993 [2024-10-09 00:31:08.592817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:37.993 [2024-10-09 00:31:08.592839] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08480, cid 0, qid 0 00:23:37.993 [2024-10-09 00:31:08.600733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.993 [2024-10-09 00:31:08.600743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.993 [2024-10-09 00:31:08.600747] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.600752] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:37.993 [2024-10-09 00:31:08.600761] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:37.993 [2024-10-09 00:31:08.600769] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:37.993 [2024-10-09 00:31:08.600774] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to read vs wait for vs (no timeout) 00:23:37.993 [2024-10-09 00:31:08.600788] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.600792] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.600795] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.993 [2024-10-09 00:31:08.600805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.993 [2024-10-09 00:31:08.600821] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08480, cid 0, qid 0 00:23:37.993 [2024-10-09 00:31:08.601011] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.993 [2024-10-09 00:31:08.601018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.993 [2024-10-09 00:31:08.601021] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.601025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:37.993 [2024-10-09 00:31:08.601035] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:37.993 [2024-10-09 00:31:08.601043] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:37.993 [2024-10-09 00:31:08.601050] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.601054] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.601058] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.993 [2024-10-09 00:31:08.601065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.993 [2024-10-09 00:31:08.601076] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08480, cid 0, qid 0 00:23:37.993 [2024-10-09 00:31:08.601291] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.993 [2024-10-09 00:31:08.601297] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.993 [2024-10-09 00:31:08.601300] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.993 [2024-10-09 00:31:08.601304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:37.993 [2024-10-09 00:31:08.601309] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:37.994 [2024-10-09 00:31:08.601318] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:37.994 [2024-10-09 00:31:08.601324] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.601328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.601332] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.601339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.994 [2024-10-09 00:31:08.601350] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xf08480, cid 0, qid 0 00:23:37.994 [2024-10-09 00:31:08.601520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.994 [2024-10-09 00:31:08.601526] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.994 [2024-10-09 00:31:08.601530] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.601534] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:37.994 [2024-10-09 00:31:08.601539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:37.994 [2024-10-09 00:31:08.601549] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.601553] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.601556] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.601563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.994 [2024-10-09 00:31:08.601574] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08480, cid 0, qid 0 00:23:37.994 [2024-10-09 00:31:08.601766] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.994 [2024-10-09 00:31:08.601772] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.994 [2024-10-09 00:31:08.601776] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.601780] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:37.994 [2024-10-09 00:31:08.601785] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:37.994 [2024-10-09 00:31:08.601790] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:37.994 [2024-10-09 00:31:08.601799] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:37.994 [2024-10-09 00:31:08.601906] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:37.994 [2024-10-09 00:31:08.601910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:37.994 [2024-10-09 00:31:08.601918] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.601922] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.601925] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.601932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.994 [2024-10-09 00:31:08.601944] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08480, cid 0, qid 0 00:23:37.994 [2024-10-09 00:31:08.602125] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.994 [2024-10-09 00:31:08.602131] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.994 
[2024-10-09 00:31:08.602134] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602138] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:37.994 [2024-10-09 00:31:08.602143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:37.994 [2024-10-09 00:31:08.602153] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602160] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.602167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.994 [2024-10-09 00:31:08.602177] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08480, cid 0, qid 0 00:23:37.994 [2024-10-09 00:31:08.602367] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.994 [2024-10-09 00:31:08.602373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.994 [2024-10-09 00:31:08.602376] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602380] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:37.994 [2024-10-09 00:31:08.602385] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:37.994 [2024-10-09 00:31:08.602389] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:37.994 [2024-10-09 00:31:08.602397] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:37.994 [2024-10-09 00:31:08.602411] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:37.994 [2024-10-09 00:31:08.602421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602424] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.602432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.994 [2024-10-09 00:31:08.602443] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08480, cid 0, qid 0 00:23:37.994 [2024-10-09 00:31:08.602737] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.994 [2024-10-09 00:31:08.602743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.994 [2024-10-09 00:31:08.602749] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602753] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8760): datao=0, datal=4096, cccid=0 00:23:37.994 [2024-10-09 00:31:08.602758] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf08480) on tqpair(0xea8760): expected_datao=0, payload_size=4096 00:23:37.994 [2024-10-09 00:31:08.602763] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602771] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602775] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.994 [2024-10-09 00:31:08.602887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.994 [2024-10-09 00:31:08.602890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:37.994 [2024-10-09 00:31:08.602902] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:37.994 [2024-10-09 00:31:08.602907] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:37.994 [2024-10-09 00:31:08.602911] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:37.994 [2024-10-09 00:31:08.602916] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:37.994 [2024-10-09 00:31:08.602920] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:37.994 [2024-10-09 00:31:08.602925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:37.994 [2024-10-09 00:31:08.602933] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:37.994 [2024-10-09 00:31:08.602940] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602944] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.602947] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.602954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:37.994 [2024-10-09 00:31:08.602966] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08480, cid 0, qid 0 00:23:37.994 [2024-10-09 00:31:08.603199] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.994 [2024-10-09 00:31:08.603206] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.994 [2024-10-09 00:31:08.603209] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603213] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:37.994 [2024-10-09 00:31:08.603220] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603224] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603228] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.603234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.994 [2024-10-09 00:31:08.603240] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.603253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.994 [2024-10-09 00:31:08.603260] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603269] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.603275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.994 [2024-10-09 00:31:08.603281] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603289] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.603294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.994 [2024-10-09 00:31:08.603299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:37.994 [2024-10-09 00:31:08.603310] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:37.994 [2024-10-09 00:31:08.603316] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.994 [2024-10-09 00:31:08.603320] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8760) 00:23:37.994 [2024-10-09 00:31:08.603327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.994 [2024-10-09 00:31:08.603340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08480, cid 0, qid 0 00:23:37.994 [2024-10-09 00:31:08.603345] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08600, cid 1, qid 0 00:23:37.994 [2024-10-09 00:31:08.603350] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08780, cid 2, qid 0 00:23:37.994 [2024-10-09 00:31:08.603354] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:37.995 [2024-10-09 00:31:08.603359] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08a80, cid 4, qid 0 00:23:37.995 [2024-10-09 00:31:08.603599] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.995 [2024-10-09 00:31:08.603605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.995 [2024-10-09 00:31:08.603608] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.995 [2024-10-09 00:31:08.603612] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08a80) on tqpair=0xea8760 00:23:37.995 [2024-10-09 00:31:08.603617] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:37.995 [2024-10-09 00:31:08.603622] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:37.995 [2024-10-09 00:31:08.603631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:37.995 [2024-10-09 00:31:08.603640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:37.995 [2024-10-09 00:31:08.603646] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.995 [2024-10-09 00:31:08.603650] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.995 [2024-10-09 00:31:08.603654] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8760) 00:23:37.995 [2024-10-09 00:31:08.603661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:37.995 [2024-10-09 00:31:08.603671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08a80, cid 4, qid 0 00:23:37.995 [2024-10-09 00:31:08.603869] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.995 [2024-10-09 00:31:08.603876] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.995 [2024-10-09 00:31:08.603881] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.995 [2024-10-09 00:31:08.603885] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08a80) on tqpair=0xea8760 00:23:37.995 [2024-10-09 00:31:08.603951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:37.995 [2024-10-09 00:31:08.603962] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:37.995 [2024-10-09 00:31:08.603970] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.995 [2024-10-09 00:31:08.603973] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8760) 00:23:37.995 [2024-10-09 00:31:08.603980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.995 [2024-10-09 00:31:08.603991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08a80, cid 4, qid 0 00:23:37.995 [2024-10-09 00:31:08.604207] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.995 [2024-10-09 00:31:08.604214] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.995 [2024-10-09 00:31:08.604218] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.995 [2024-10-09 00:31:08.604221] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8760): datao=0, datal=4096, cccid=4 00:23:37.995 [2024-10-09 00:31:08.604226] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf08a80) on tqpair(0xea8760): expected_datao=0, payload_size=4096 00:23:37.995 [2024-10-09 00:31:08.604230] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.995 [2024-10-09 00:31:08.604245] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:23:37.995 [2024-10-09 00:31:08.604249] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:38.258 [2024-10-09 00:31:08.646731] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.258 [2024-10-09 00:31:08.646744] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.258 [2024-10-09 00:31:08.646748] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.258 [2024-10-09 00:31:08.646752] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08a80) on tqpair=0xea8760 00:23:38.258 [2024-10-09 00:31:08.646764] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:38.258 [2024-10-09 00:31:08.646776] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:38.258 [2024-10-09 00:31:08.646787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:38.258 [2024-10-09 00:31:08.646794] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.258 [2024-10-09 00:31:08.646798] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8760) 00:23:38.258 [2024-10-09 00:31:08.646805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.258 [2024-10-09 00:31:08.646819] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08a80, cid 4, qid 0 00:23:38.258 [2024-10-09 00:31:08.647034] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:38.258 [2024-10-09 00:31:08.647041] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:38.258 [2024-10-09 00:31:08.647044] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:38.258 [2024-10-09 00:31:08.647048] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8760): datao=0, datal=4096, cccid=4 00:23:38.258 [2024-10-09 00:31:08.647052] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf08a80) on tqpair(0xea8760): expected_datao=0, payload_size=4096 00:23:38.258 [2024-10-09 00:31:08.647057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.258 [2024-10-09 00:31:08.647094] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:38.258 [2024-10-09 00:31:08.647106] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:38.258 [2024-10-09 00:31:08.647273] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.258 [2024-10-09 00:31:08.647279] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.258 [2024-10-09 00:31:08.647283] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.647287] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08a80) on tqpair=0xea8760 00:23:38.259 [2024-10-09 00:31:08.647301] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:38.259 [2024-10-09 00:31:08.647311] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:38.259 [2024-10-09 00:31:08.647318] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:38.259 [2024-10-09 00:31:08.647322] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.647328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.259 [2024-10-09 00:31:08.647340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08a80, cid 4, qid 0 00:23:38.259 [2024-10-09 00:31:08.647520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:38.259 [2024-10-09 00:31:08.647527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:38.259 [2024-10-09 00:31:08.647530] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.647534] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8760): datao=0, datal=4096, cccid=4 00:23:38.259 [2024-10-09 00:31:08.647538] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf08a80) on tqpair(0xea8760): expected_datao=0, payload_size=4096 00:23:38.259 [2024-10-09 00:31:08.647542] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.647559] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.647563] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.647748] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.259 [2024-10-09 00:31:08.647755] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.259 [2024-10-09 00:31:08.647759] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.647763] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08a80) on tqpair=0xea8760 00:23:38.259 [2024-10-09 00:31:08.647771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:38.259 [2024-10-09 00:31:08.647779] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:38.259 [2024-10-09 00:31:08.647788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:38.259 [2024-10-09 00:31:08.647795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:38.259 [2024-10-09 00:31:08.647800] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:38.259 [2024-10-09 00:31:08.647806] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:38.259 [2024-10-09 00:31:08.647812] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:38.259 [2024-10-09 00:31:08.647816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:38.259 [2024-10-09 00:31:08.647822] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:38.259 [2024-10-09 00:31:08.647840] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.647844] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.647851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.259 [2024-10-09 00:31:08.647859] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.647862] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.647866] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.647872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.259 [2024-10-09 00:31:08.647885] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08a80, cid 4, qid 0 00:23:38.259 [2024-10-09 00:31:08.647890] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08c00, cid 5, qid 0 00:23:38.259 [2024-10-09 00:31:08.648110] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.259 [2024-10-09 00:31:08.648116] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.259 [2024-10-09 00:31:08.648119] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648123] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08a80) on tqpair=0xea8760 00:23:38.259 [2024-10-09 00:31:08.648130] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.259 [2024-10-09 00:31:08.648135] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.259 [2024-10-09 00:31:08.648139] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648143] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08c00) on tqpair=0xea8760 00:23:38.259 [2024-10-09 00:31:08.648152] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648156] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.648162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.259 [2024-10-09 00:31:08.648173] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08c00, cid 5, qid 0 00:23:38.259 [2024-10-09 00:31:08.648359] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.259 [2024-10-09 00:31:08.648365] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.259 [2024-10-09 00:31:08.648369] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648373] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08c00) on tqpair=0xea8760 00:23:38.259 [2024-10-09 00:31:08.648382] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648386] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.648392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.259 
[2024-10-09 00:31:08.648403] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08c00, cid 5, qid 0 00:23:38.259 [2024-10-09 00:31:08.648610] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.259 [2024-10-09 00:31:08.648616] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.259 [2024-10-09 00:31:08.648619] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648623] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08c00) on tqpair=0xea8760 00:23:38.259 [2024-10-09 00:31:08.648633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648637] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.648644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.259 [2024-10-09 00:31:08.648656] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08c00, cid 5, qid 0 00:23:38.259 [2024-10-09 00:31:08.648861] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.259 [2024-10-09 00:31:08.648868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.259 [2024-10-09 00:31:08.648872] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08c00) on tqpair=0xea8760 00:23:38.259 [2024-10-09 00:31:08.648891] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648895] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.648902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.259 [2024-10-09 00:31:08.648909] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648913] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.648919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.259 [2024-10-09 00:31:08.648926] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648930] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.648936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.259 [2024-10-09 00:31:08.648945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.648949] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xea8760) 00:23:38.259 [2024-10-09 00:31:08.648955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.259 [2024-10-09 00:31:08.648967] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xf08c00, cid 5, qid 0 00:23:38.259 [2024-10-09 00:31:08.648973] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08a80, cid 4, qid 0 00:23:38.259 [2024-10-09 00:31:08.648977] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08d80, cid 6, qid 0 00:23:38.259 [2024-10-09 00:31:08.648982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08f00, cid 7, qid 0 00:23:38.259 [2024-10-09 00:31:08.649274] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:38.259 [2024-10-09 00:31:08.649281] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:38.259 [2024-10-09 00:31:08.649285] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.649288] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8760): datao=0, datal=8192, cccid=5 00:23:38.259 [2024-10-09 00:31:08.649293] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf08c00) on tqpair(0xea8760): expected_datao=0, payload_size=8192 00:23:38.259 [2024-10-09 00:31:08.649297] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.649396] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.649401] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.649407] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:38.259 [2024-10-09 00:31:08.649413] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:38.259 [2024-10-09 00:31:08.649416] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.649420] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8760): datao=0, datal=512, cccid=4 00:23:38.259 [2024-10-09 00:31:08.649424] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf08a80) on tqpair(0xea8760): expected_datao=0, payload_size=512 00:23:38.259 [2024-10-09 00:31:08.649430] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.649437] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.649440] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.649446] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:38.259 [2024-10-09 00:31:08.649452] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:38.259 [2024-10-09 00:31:08.649455] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:38.259 [2024-10-09 00:31:08.649459] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8760): datao=0, datal=512, cccid=6 00:23:38.260 [2024-10-09 00:31:08.649463] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf08d80) on tqpair(0xea8760): expected_datao=0, payload_size=512 00:23:38.260 [2024-10-09 00:31:08.649467] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.649474] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.649477] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.649483] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:38.260 [2024-10-09 00:31:08.649489] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:38.260 [2024-10-09 
00:31:08.649492] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.649495] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8760): datao=0, datal=4096, cccid=7 00:23:38.260 [2024-10-09 00:31:08.649500] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf08f00) on tqpair(0xea8760): expected_datao=0, payload_size=4096 00:23:38.260 [2024-10-09 00:31:08.649504] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.649518] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.649522] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.689917] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.260 [2024-10-09 00:31:08.689927] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.260 [2024-10-09 00:31:08.689931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.689935] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08c00) on tqpair=0xea8760 00:23:38.260 [2024-10-09 00:31:08.689949] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.260 [2024-10-09 00:31:08.689955] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.260 [2024-10-09 00:31:08.689958] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.689962] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08a80) on tqpair=0xea8760 00:23:38.260 [2024-10-09 00:31:08.689973] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.260 [2024-10-09 00:31:08.689978] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.260 [2024-10-09 00:31:08.689982] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.689986] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08d80) on tqpair=0xea8760 00:23:38.260 [2024-10-09 00:31:08.689993] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.260 [2024-10-09 00:31:08.689998] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.260 [2024-10-09 00:31:08.690002] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.260 [2024-10-09 00:31:08.690006] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08f00) on tqpair=0xea8760 00:23:38.260 ===================================================== 00:23:38.260 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.260 ===================================================== 00:23:38.260 Controller Capabilities/Features 00:23:38.260 ================================ 00:23:38.260 Vendor ID: 8086 00:23:38.260 Subsystem Vendor ID: 8086 00:23:38.260 Serial Number: SPDK00000000000001 00:23:38.260 Model Number: SPDK bdev Controller 00:23:38.260 Firmware Version: 25.01 00:23:38.260 Recommended Arb Burst: 6 00:23:38.260 IEEE OUI Identifier: e4 d2 5c 00:23:38.260 Multi-path I/O 00:23:38.260 May have multiple subsystem ports: Yes 00:23:38.260 May have multiple controllers: Yes 00:23:38.260 Associated with SR-IOV VF: No 00:23:38.260 Max Data Transfer Size: 131072 00:23:38.260 Max Number of Namespaces: 32 00:23:38.260 Max Number of I/O Queues: 127 00:23:38.260 NVMe Specification Version (VS): 1.3 00:23:38.260 NVMe Specification Version (Identify): 
1.3 00:23:38.260 Maximum Queue Entries: 128 00:23:38.260 Contiguous Queues Required: Yes 00:23:38.260 Arbitration Mechanisms Supported 00:23:38.260 Weighted Round Robin: Not Supported 00:23:38.260 Vendor Specific: Not Supported 00:23:38.260 Reset Timeout: 15000 ms 00:23:38.260 Doorbell Stride: 4 bytes 00:23:38.260 NVM Subsystem Reset: Not Supported 00:23:38.260 Command Sets Supported 00:23:38.260 NVM Command Set: Supported 00:23:38.260 Boot Partition: Not Supported 00:23:38.260 Memory Page Size Minimum: 4096 bytes 00:23:38.260 Memory Page Size Maximum: 4096 bytes 00:23:38.260 Persistent Memory Region: Not Supported 00:23:38.260 Optional Asynchronous Events Supported 00:23:38.260 Namespace Attribute Notices: Supported 00:23:38.260 Firmware Activation Notices: Not Supported 00:23:38.260 ANA Change Notices: Not Supported 00:23:38.260 PLE Aggregate Log Change Notices: Not Supported 00:23:38.260 LBA Status Info Alert Notices: Not Supported 00:23:38.260 EGE Aggregate Log Change Notices: Not Supported 00:23:38.260 Normal NVM Subsystem Shutdown event: Not Supported 00:23:38.260 Zone Descriptor Change Notices: Not Supported 00:23:38.260 Discovery Log Change Notices: Not Supported 00:23:38.260 Controller Attributes 00:23:38.260 128-bit Host Identifier: Supported 00:23:38.260 Non-Operational Permissive Mode: Not Supported 00:23:38.260 NVM Sets: Not Supported 00:23:38.260 Read Recovery Levels: Not Supported 00:23:38.260 Endurance Groups: Not Supported 00:23:38.260 Predictable Latency Mode: Not Supported 00:23:38.260 Traffic Based Keep ALive: Not Supported 00:23:38.260 Namespace Granularity: Not Supported 00:23:38.260 SQ Associations: Not Supported 00:23:38.260 UUID List: Not Supported 00:23:38.260 Multi-Domain Subsystem: Not Supported 00:23:38.260 Fixed Capacity Management: Not Supported 00:23:38.260 Variable Capacity Management: Not Supported 00:23:38.260 Delete Endurance Group: Not Supported 00:23:38.260 Delete NVM Set: Not Supported 00:23:38.260 Extended LBA Formats Supported: Not Supported 00:23:38.260 Flexible Data Placement Supported: Not Supported 00:23:38.260 00:23:38.260 Controller Memory Buffer Support 00:23:38.260 ================================ 00:23:38.260 Supported: No 00:23:38.260 00:23:38.260 Persistent Memory Region Support 00:23:38.260 ================================ 00:23:38.260 Supported: No 00:23:38.260 00:23:38.260 Admin Command Set Attributes 00:23:38.260 ============================ 00:23:38.260 Security Send/Receive: Not Supported 00:23:38.260 Format NVM: Not Supported 00:23:38.260 Firmware Activate/Download: Not Supported 00:23:38.260 Namespace Management: Not Supported 00:23:38.260 Device Self-Test: Not Supported 00:23:38.260 Directives: Not Supported 00:23:38.260 NVMe-MI: Not Supported 00:23:38.260 Virtualization Management: Not Supported 00:23:38.260 Doorbell Buffer Config: Not Supported 00:23:38.260 Get LBA Status Capability: Not Supported 00:23:38.260 Command & Feature Lockdown Capability: Not Supported 00:23:38.260 Abort Command Limit: 4 00:23:38.260 Async Event Request Limit: 4 00:23:38.260 Number of Firmware Slots: N/A 00:23:38.260 Firmware Slot 1 Read-Only: N/A 00:23:38.260 Firmware Activation Without Reset: N/A 00:23:38.260 Multiple Update Detection Support: N/A 00:23:38.260 Firmware Update Granularity: No Information Provided 00:23:38.260 Per-Namespace SMART Log: No 00:23:38.260 Asymmetric Namespace Access Log Page: Not Supported 00:23:38.260 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:38.260 Command Effects Log Page: Supported 00:23:38.260 Get Log Page 
Extended Data: Supported 00:23:38.260 Telemetry Log Pages: Not Supported 00:23:38.260 Persistent Event Log Pages: Not Supported 00:23:38.260 Supported Log Pages Log Page: May Support 00:23:38.260 Commands Supported & Effects Log Page: Not Supported 00:23:38.260 Feature Identifiers & Effects Log Page:May Support 00:23:38.260 NVMe-MI Commands & Effects Log Page: May Support 00:23:38.260 Data Area 4 for Telemetry Log: Not Supported 00:23:38.260 Error Log Page Entries Supported: 128 00:23:38.260 Keep Alive: Supported 00:23:38.260 Keep Alive Granularity: 10000 ms 00:23:38.260 00:23:38.260 NVM Command Set Attributes 00:23:38.260 ========================== 00:23:38.260 Submission Queue Entry Size 00:23:38.260 Max: 64 00:23:38.260 Min: 64 00:23:38.260 Completion Queue Entry Size 00:23:38.260 Max: 16 00:23:38.260 Min: 16 00:23:38.260 Number of Namespaces: 32 00:23:38.260 Compare Command: Supported 00:23:38.260 Write Uncorrectable Command: Not Supported 00:23:38.260 Dataset Management Command: Supported 00:23:38.260 Write Zeroes Command: Supported 00:23:38.260 Set Features Save Field: Not Supported 00:23:38.260 Reservations: Supported 00:23:38.260 Timestamp: Not Supported 00:23:38.260 Copy: Supported 00:23:38.260 Volatile Write Cache: Present 00:23:38.260 Atomic Write Unit (Normal): 1 00:23:38.260 Atomic Write Unit (PFail): 1 00:23:38.260 Atomic Compare & Write Unit: 1 00:23:38.260 Fused Compare & Write: Supported 00:23:38.260 Scatter-Gather List 00:23:38.260 SGL Command Set: Supported 00:23:38.260 SGL Keyed: Supported 00:23:38.260 SGL Bit Bucket Descriptor: Not Supported 00:23:38.260 SGL Metadata Pointer: Not Supported 00:23:38.260 Oversized SGL: Not Supported 00:23:38.260 SGL Metadata Address: Not Supported 00:23:38.260 SGL Offset: Supported 00:23:38.260 Transport SGL Data Block: Not Supported 00:23:38.260 Replay Protected Memory Block: Not Supported 00:23:38.260 00:23:38.260 Firmware Slot Information 00:23:38.260 ========================= 00:23:38.260 Active slot: 1 00:23:38.260 Slot 1 Firmware Revision: 25.01 00:23:38.260 00:23:38.260 00:23:38.260 Commands Supported and Effects 00:23:38.260 ============================== 00:23:38.260 Admin Commands 00:23:38.260 -------------- 00:23:38.260 Get Log Page (02h): Supported 00:23:38.260 Identify (06h): Supported 00:23:38.260 Abort (08h): Supported 00:23:38.260 Set Features (09h): Supported 00:23:38.260 Get Features (0Ah): Supported 00:23:38.260 Asynchronous Event Request (0Ch): Supported 00:23:38.260 Keep Alive (18h): Supported 00:23:38.261 I/O Commands 00:23:38.261 ------------ 00:23:38.261 Flush (00h): Supported LBA-Change 00:23:38.261 Write (01h): Supported LBA-Change 00:23:38.261 Read (02h): Supported 00:23:38.261 Compare (05h): Supported 00:23:38.261 Write Zeroes (08h): Supported LBA-Change 00:23:38.261 Dataset Management (09h): Supported LBA-Change 00:23:38.261 Copy (19h): Supported LBA-Change 00:23:38.261 00:23:38.261 Error Log 00:23:38.261 ========= 00:23:38.261 00:23:38.261 Arbitration 00:23:38.261 =========== 00:23:38.261 Arbitration Burst: 1 00:23:38.261 00:23:38.261 Power Management 00:23:38.261 ================ 00:23:38.261 Number of Power States: 1 00:23:38.261 Current Power State: Power State #0 00:23:38.261 Power State #0: 00:23:38.261 Max Power: 0.00 W 00:23:38.261 Non-Operational State: Operational 00:23:38.261 Entry Latency: Not Reported 00:23:38.261 Exit Latency: Not Reported 00:23:38.261 Relative Read Throughput: 0 00:23:38.261 Relative Read Latency: 0 00:23:38.261 Relative Write Throughput: 0 00:23:38.261 Relative Write Latency: 
0 00:23:38.261 Idle Power: Not Reported 00:23:38.261 Active Power: Not Reported 00:23:38.261 Non-Operational Permissive Mode: Not Supported 00:23:38.261 00:23:38.261 Health Information 00:23:38.261 ================== 00:23:38.261 Critical Warnings: 00:23:38.261 Available Spare Space: OK 00:23:38.261 Temperature: OK 00:23:38.261 Device Reliability: OK 00:23:38.261 Read Only: No 00:23:38.261 Volatile Memory Backup: OK 00:23:38.261 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:38.261 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:38.261 Available Spare: 0% 00:23:38.261 Available Spare Threshold: 0% 00:23:38.261 Life Percentage Used:[2024-10-09 00:31:08.690111] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.690117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xea8760) 00:23:38.261 [2024-10-09 00:31:08.690124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.261 [2024-10-09 00:31:08.690140] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08f00, cid 7, qid 0 00:23:38.261 [2024-10-09 00:31:08.690408] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.261 [2024-10-09 00:31:08.690414] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.261 [2024-10-09 00:31:08.690417] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.690422] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08f00) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.690456] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:38.261 [2024-10-09 00:31:08.690466] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08480) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.690473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.261 [2024-10-09 00:31:08.690478] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08600) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.690483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.261 [2024-10-09 00:31:08.690488] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08780) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.690492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.261 [2024-10-09 00:31:08.690497] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.690502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.261 [2024-10-09 00:31:08.690511] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.690515] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.690518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.261 [2024-10-09 00:31:08.690525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:38.261 [2024-10-09 00:31:08.690538] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.261 [2024-10-09 00:31:08.694730] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.261 [2024-10-09 00:31:08.694739] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.261 [2024-10-09 00:31:08.694742] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.694746] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.694754] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.694757] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.694761] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.261 [2024-10-09 00:31:08.694768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.261 [2024-10-09 00:31:08.694783] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.261 [2024-10-09 00:31:08.695037] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.261 [2024-10-09 00:31:08.695043] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.261 [2024-10-09 00:31:08.695046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.695055] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:38.261 [2024-10-09 00:31:08.695060] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:38.261 [2024-10-09 00:31:08.695072] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695076] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695079] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.261 [2024-10-09 00:31:08.695086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.261 [2024-10-09 00:31:08.695097] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.261 [2024-10-09 00:31:08.695286] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.261 [2024-10-09 00:31:08.695292] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.261 [2024-10-09 00:31:08.695296] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.695311] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695315] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695318] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.261 [2024-10-09 00:31:08.695325] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.261 [2024-10-09 00:31:08.695336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.261 [2024-10-09 00:31:08.695589] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.261 [2024-10-09 00:31:08.695596] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.261 [2024-10-09 00:31:08.695599] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695603] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.695613] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695617] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695620] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.261 [2024-10-09 00:31:08.695627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.261 [2024-10-09 00:31:08.695638] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.261 [2024-10-09 00:31:08.695833] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.261 [2024-10-09 00:31:08.695839] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.261 [2024-10-09 00:31:08.695843] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695847] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.695856] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695860] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.695864] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.261 [2024-10-09 00:31:08.695871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.261 [2024-10-09 00:31:08.695881] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.261 [2024-10-09 00:31:08.696094] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.261 [2024-10-09 00:31:08.696100] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.261 [2024-10-09 00:31:08.696103] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.696107] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.261 [2024-10-09 00:31:08.696117] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.696121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.261 [2024-10-09 00:31:08.696126] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.261 [2024-10-09 00:31:08.696133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.261 [2024-10-09 00:31:08.696144] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xf08900, cid 3, qid 0 00:23:38.261 [2024-10-09 00:31:08.696345] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.696351] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.696354] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.696358] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.696367] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.696371] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.696375] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.696382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.696392] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.696648] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.696654] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.696657] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.696661] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.696671] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.696675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.696678] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.696685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.696696] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.696875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.696882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.696886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.696889] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.696899] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.696903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.696907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.696913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.696924] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.697152] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.697158] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.697161] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697165] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.697175] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697179] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697182] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.697191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.697202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.697405] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.697411] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.697415] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697418] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.697429] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697433] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697436] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.697443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.697454] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.697655] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.697661] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.697665] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697669] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.697679] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697683] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697686] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.697693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.697704] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.697916] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.697923] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.697926] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697930] nvme_tcp.c:1079:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.697940] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697944] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.697947] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.697954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.697965] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.698159] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.698165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.698169] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.698172] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.698182] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.698186] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.698190] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.698196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.698209] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.698412] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.698419] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.698422] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.698426] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.698436] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.698440] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.698443] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.698450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.698460] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.698715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.702729] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.702734] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.702738] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.702749] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.702753] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.702756] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8760) 00:23:38.262 [2024-10-09 00:31:08.702763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.262 [2024-10-09 00:31:08.702774] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf08900, cid 3, qid 0 00:23:38.262 [2024-10-09 00:31:08.702948] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:38.262 [2024-10-09 00:31:08.702955] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:38.262 [2024-10-09 00:31:08.702959] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:38.262 [2024-10-09 00:31:08.702963] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf08900) on tqpair=0xea8760 00:23:38.262 [2024-10-09 00:31:08.702971] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:38.262 0% 00:23:38.262 Data Units Read: 0 00:23:38.262 Data Units Written: 0 00:23:38.262 Host Read Commands: 0 00:23:38.262 Host Write Commands: 0 00:23:38.262 Controller Busy Time: 0 minutes 00:23:38.262 Power Cycles: 0 00:23:38.262 Power On Hours: 0 hours 00:23:38.262 Unsafe Shutdowns: 0 00:23:38.262 Unrecoverable Media Errors: 0 00:23:38.262 Lifetime Error Log Entries: 0 00:23:38.262 Warning Temperature Time: 0 minutes 00:23:38.262 Critical Temperature Time: 0 minutes 00:23:38.262 00:23:38.262 Number of Queues 00:23:38.262 ================ 00:23:38.262 Number of I/O Submission Queues: 127 00:23:38.262 Number of I/O Completion Queues: 127 00:23:38.262 00:23:38.262 Active Namespaces 00:23:38.262 ================= 00:23:38.262 Namespace ID:1 00:23:38.262 Error Recovery Timeout: Unlimited 00:23:38.262 Command Set Identifier: NVM (00h) 00:23:38.262 Deallocate: Supported 00:23:38.262 Deallocated/Unwritten Error: Not Supported 00:23:38.262 Deallocated Read Value: Unknown 00:23:38.262 Deallocate in Write Zeroes: Not Supported 00:23:38.262 Deallocated Guard Field: 0xFFFF 00:23:38.262 Flush: Supported 00:23:38.262 Reservation: Supported 00:23:38.262 Namespace Sharing Capabilities: Multiple Controllers 00:23:38.262 Size (in LBAs): 131072 (0GiB) 00:23:38.262 Capacity (in LBAs): 131072 (0GiB) 00:23:38.262 Utilization (in LBAs): 131072 (0GiB) 00:23:38.262 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:38.262 EUI64: ABCDEF0123456789 00:23:38.263 UUID: bc21a9f3-023b-4758-9f00-db22a9d9db91 00:23:38.263 Thin Provisioning: Not Supported 00:23:38.263 Per-NS Atomic Units: Yes 00:23:38.263 Atomic Boundary Size (Normal): 0 00:23:38.263 Atomic Boundary Size (PFail): 0 00:23:38.263 Atomic Boundary Offset: 0 00:23:38.263 Maximum Single Source Range Length: 65535 00:23:38.263 Maximum Copy Length: 65535 00:23:38.263 Maximum Source Range Count: 1 00:23:38.263 NGUID/EUI64 Never Reused: No 00:23:38.263 Namespace Write Protected: No 00:23:38.263 Number of LBA Formats: 1 00:23:38.263 Current LBA Format: LBA Format #00 00:23:38.263 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:38.263 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.263 rmmod nvme_tcp 00:23:38.263 rmmod nvme_fabrics 00:23:38.263 rmmod nvme_keyring 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 3342398 ']' 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 3342398 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3342398 ']' 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3342398 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.263 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3342398 00:23:38.522 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:38.522 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:38.522 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3342398' 00:23:38.522 killing process with pid 3342398 00:23:38.522 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3342398 00:23:38.522 00:31:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3342398 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.522 00:31:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:41.063 00:23:41.063 real 0m11.699s 00:23:41.063 user 0m8.588s 00:23:41.063 sys 0m6.169s 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.063 ************************************ 00:23:41.063 END TEST nvmf_identify 00:23:41.063 ************************************ 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.063 ************************************ 00:23:41.063 START TEST nvmf_perf 00:23:41.063 ************************************ 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:41.063 * Looking for test storage... 
00:23:41.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.063 --rc genhtml_branch_coverage=1 00:23:41.063 --rc genhtml_function_coverage=1 00:23:41.063 --rc genhtml_legend=1 00:23:41.063 --rc geninfo_all_blocks=1 00:23:41.063 --rc geninfo_unexecuted_blocks=1 00:23:41.063 00:23:41.063 ' 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.063 --rc genhtml_branch_coverage=1 00:23:41.063 --rc genhtml_function_coverage=1 00:23:41.063 --rc genhtml_legend=1 00:23:41.063 --rc geninfo_all_blocks=1 00:23:41.063 --rc geninfo_unexecuted_blocks=1 00:23:41.063 00:23:41.063 ' 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.063 --rc genhtml_branch_coverage=1 00:23:41.063 --rc genhtml_function_coverage=1 00:23:41.063 --rc genhtml_legend=1 00:23:41.063 --rc geninfo_all_blocks=1 00:23:41.063 --rc geninfo_unexecuted_blocks=1 00:23:41.063 00:23:41.063 ' 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.063 --rc genhtml_branch_coverage=1 00:23:41.063 --rc genhtml_function_coverage=1 00:23:41.063 --rc genhtml_legend=1 00:23:41.063 --rc geninfo_all_blocks=1 00:23:41.063 --rc geninfo_unexecuted_blocks=1 00:23:41.063 00:23:41.063 ' 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:41.063 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:41.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.064 00:31:11 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:41.064 00:31:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:49.195 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:49.195 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:49.195 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:49.195 00:31:18 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:49.195 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.195 00:31:18 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:49.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.731 ms 00:23:49.195 00:23:49.195 --- 10.0.0.2 ping statistics --- 00:23:49.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.195 rtt min/avg/max/mdev = 0.731/0.731/0.731/0.000 ms 00:23:49.195 00:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:23:49.195 00:23:49.195 --- 10.0.0.1 ping statistics --- 00:23:49.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.195 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:23:49.195 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.195 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:23:49.195 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=3347069 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 3347069 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3347069 ']' 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:49.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.196 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:49.196 [2024-10-09 00:31:19.110145] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:23:49.196 [2024-10-09 00:31:19.110207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.196 [2024-10-09 00:31:19.196514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.196 [2024-10-09 00:31:19.291486] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.196 [2024-10-09 00:31:19.291545] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.196 [2024-10-09 00:31:19.291555] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.196 [2024-10-09 00:31:19.291563] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.196 [2024-10-09 00:31:19.291570] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.196 [2024-10-09 00:31:19.293895] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.196 [2024-10-09 00:31:19.294057] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.196 [2024-10-09 00:31:19.294216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.196 [2024-10-09 00:31:19.294217] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.456 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.456 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:49.456 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:49.456 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:49.456 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:49.456 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.456 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:49.456 00:31:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:50.026 00:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:50.026 00:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:50.286 00:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:50.286 00:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:50.547 00:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:23:50.547 00:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:50.547 00:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:50.547 00:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:50.547 00:31:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:50.547 [2024-10-09 00:31:21.090346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.547 00:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:50.806 00:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:50.806 00:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:51.065 00:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:51.065 00:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:51.065 00:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.328 [2024-10-09 00:31:21.845199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.328 00:31:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:51.592 00:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:51.592 00:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:51.592 00:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:51.592 00:31:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:52.971 Initializing NVMe Controllers 00:23:52.971 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:52.971 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:52.971 Initialization complete. Launching workers. 
00:23:52.971 ======================================================== 00:23:52.971 Latency(us) 00:23:52.971 Device Information : IOPS MiB/s Average min max 00:23:52.971 PCIE (0000:65:00.0) NSID 1 from core 0: 79244.73 309.55 403.24 13.38 4873.81 00:23:52.971 ======================================================== 00:23:52.971 Total : 79244.73 309.55 403.24 13.38 4873.81 00:23:52.971 00:23:52.971 00:31:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:54.352 Initializing NVMe Controllers 00:23:54.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:54.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:54.352 Initialization complete. Launching workers. 00:23:54.352 ======================================================== 00:23:54.352 Latency(us) 00:23:54.353 Device Information : IOPS MiB/s Average min max 00:23:54.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 58.00 0.23 17458.36 252.60 45640.22 00:23:54.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17939.15 7930.25 47902.73 00:23:54.353 ======================================================== 00:23:54.353 Total : 114.00 0.45 17694.54 252.60 47902.73 00:23:54.353 00:23:54.353 00:31:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:55.293 Initializing NVMe Controllers 00:23:55.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:55.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:55.293 Initialization complete. Launching workers. 00:23:55.293 ======================================================== 00:23:55.293 Latency(us) 00:23:55.293 Device Information : IOPS MiB/s Average min max 00:23:55.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11838.06 46.24 2702.91 489.03 8805.96 00:23:55.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3674.67 14.35 8708.84 4640.57 17245.12 00:23:55.293 ======================================================== 00:23:55.293 Total : 15512.74 60.60 4125.60 489.03 17245.12 00:23:55.293 00:23:55.293 00:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:55.293 00:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:55.293 00:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:58.587 Initializing NVMe Controllers 00:23:58.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.587 Controller IO queue size 128, less than required. 00:23:58.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:58.587 Controller IO queue size 128, less than required. 00:23:58.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:58.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:58.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:58.587 Initialization complete. Launching workers. 00:23:58.587 ======================================================== 00:23:58.587 Latency(us) 00:23:58.587 Device Information : IOPS MiB/s Average min max 00:23:58.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1994.69 498.67 64797.73 35708.44 96004.99 00:23:58.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.99 151.50 223766.70 58550.30 320127.03 00:23:58.587 ======================================================== 00:23:58.587 Total : 2600.68 650.17 101839.63 35708.44 320127.03 00:23:58.587 00:23:58.587 00:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:58.587 No valid NVMe controllers or AIO or URING devices found 00:23:58.587 Initializing NVMe Controllers 00:23:58.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.587 Controller IO queue size 128, less than required. 00:23:58.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:58.587 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:58.587 Controller IO queue size 128, less than required. 00:23:58.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:58.587 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:58.587 WARNING: Some requested NVMe devices were skipped 00:23:58.587 00:31:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:01.128 Initializing NVMe Controllers 00:24:01.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.128 Controller IO queue size 128, less than required. 00:24:01.128 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.128 Controller IO queue size 128, less than required. 00:24:01.128 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:01.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:01.128 Initialization complete. Launching workers. 
00:24:01.128 00:24:01.128 ==================== 00:24:01.128 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:01.128 TCP transport: 00:24:01.128 polls: 32833 00:24:01.128 idle_polls: 19172 00:24:01.128 sock_completions: 13661 00:24:01.128 nvme_completions: 7115 00:24:01.128 submitted_requests: 10726 00:24:01.128 queued_requests: 1 00:24:01.128 00:24:01.128 ==================== 00:24:01.128 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:01.128 TCP transport: 00:24:01.128 polls: 32954 00:24:01.128 idle_polls: 18535 00:24:01.128 sock_completions: 14419 00:24:01.128 nvme_completions: 7677 00:24:01.128 submitted_requests: 11436 00:24:01.128 queued_requests: 1 00:24:01.128 ======================================================== 00:24:01.128 Latency(us) 00:24:01.128 Device Information : IOPS MiB/s Average min max 00:24:01.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1778.08 444.52 73195.90 40008.60 121238.33 00:24:01.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1918.54 479.64 67533.18 31884.80 106157.57 00:24:01.128 ======================================================== 00:24:01.128 Total : 3696.62 924.16 70256.95 31884.80 121238.33 00:24:01.128 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.128 rmmod nvme_tcp 00:24:01.128 rmmod nvme_fabrics 00:24:01.128 rmmod nvme_keyring 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 3347069 ']' 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 3347069 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3347069 ']' 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3347069 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3347069 00:24:01.128 00:31:31 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3347069' 00:24:01.128 killing process with pid 3347069 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3347069 00:24:01.128 00:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3347069 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.037 00:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:05.578 00:24:05.578 real 0m24.349s 00:24:05.578 user 0m58.512s 00:24:05.578 sys 0m8.620s 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:05.578 ************************************ 00:24:05.578 END TEST nvmf_perf 00:24:05.578 ************************************ 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.578 ************************************ 00:24:05.578 START TEST nvmf_fio_host 00:24:05.578 ************************************ 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:05.578 * Looking for test storage... 
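The teardown traced above (nvmftestfini) is what returns the machine to a clean state before the next test starts: the subsystem is deleted over RPC, the host-side NVMe fabrics modules are unloaded, the target process is killed, and the SPDK firewall rules and test addresses are removed. Condensed into its effective commands, with paths shortened to the spdk checkout and the PID taken from this run:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp       # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring unloading here
    modprobe -v -r nvme-fabrics
    kill 3347069                  # nvmf_tgt PID for this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1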
00:24:05.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:05.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.578 --rc genhtml_branch_coverage=1 00:24:05.578 --rc genhtml_function_coverage=1 00:24:05.578 --rc genhtml_legend=1 00:24:05.578 --rc geninfo_all_blocks=1 00:24:05.578 --rc geninfo_unexecuted_blocks=1 00:24:05.578 00:24:05.578 ' 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:05.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.578 --rc genhtml_branch_coverage=1 00:24:05.578 --rc genhtml_function_coverage=1 00:24:05.578 --rc genhtml_legend=1 00:24:05.578 --rc geninfo_all_blocks=1 00:24:05.578 --rc geninfo_unexecuted_blocks=1 00:24:05.578 00:24:05.578 ' 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:05.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.578 --rc genhtml_branch_coverage=1 00:24:05.578 --rc genhtml_function_coverage=1 00:24:05.578 --rc genhtml_legend=1 00:24:05.578 --rc geninfo_all_blocks=1 00:24:05.578 --rc geninfo_unexecuted_blocks=1 00:24:05.578 00:24:05.578 ' 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:05.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.578 --rc genhtml_branch_coverage=1 00:24:05.578 --rc genhtml_function_coverage=1 00:24:05.578 --rc genhtml_legend=1 00:24:05.578 --rc geninfo_all_blocks=1 00:24:05.578 --rc geninfo_unexecuted_blocks=1 00:24:05.578 00:24:05.578 ' 00:24:05.578 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.579 00:31:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:05.579 
00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:05.579 00:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.722 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:13.723 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:13.723 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:13.723 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:13.723 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:13.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:24:13.723 00:24:13.723 --- 10.0.0.2 ping statistics --- 00:24:13.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.723 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:13.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:24:13.723 00:24:13.723 --- 10.0.0.1 ping statistics --- 00:24:13.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.723 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3353986 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3353986 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3353986 ']' 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.723 00:31:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.723 [2024-10-09 00:31:43.489437] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
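Before that target process comes up, nvmftestinit rebuilds the two-sided test bed shown in the trace: of the two ice ports found above, cvl_0_1 stays in the root namespace as the initiator side and cvl_0_0 is moved into a private namespace to host the target, after which each side pings the other. Condensed from the trace (interface names as detected on this machine, commands run as root, paths shortened to the spdk checkout):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port in the host firewall, tagged so teardown can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then started inside that namespace, which is the nvmf_tgt invocation whose startup banner begins above:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF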
00:24:13.723 [2024-10-09 00:31:43.489503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.723 [2024-10-09 00:31:43.576884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:13.723 [2024-10-09 00:31:43.671398] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.723 [2024-10-09 00:31:43.671460] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.723 [2024-10-09 00:31:43.671469] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.723 [2024-10-09 00:31:43.671476] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.723 [2024-10-09 00:31:43.671482] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.723 [2024-10-09 00:31:43.673614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.723 [2024-10-09 00:31:43.673785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.723 [2024-10-09 00:31:43.673903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.723 [2024-10-09 00:31:43.673904] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.723 00:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.723 00:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:13.723 00:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:13.985 [2024-10-09 00:31:44.476790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.985 00:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:13.985 00:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:13.985 00:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.985 00:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:14.245 Malloc1 00:24:14.245 00:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.506 00:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:14.766 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.766 [2024-10-09 00:31:45.333624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.766 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:15.028 00:31:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:15.287 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:15.287 fio-3.35 00:24:15.287 Starting 1 thread 00:24:17.828 00:24:17.829 test: (groupid=0, jobs=1): 
err= 0: pid=3354671: Wed Oct 9 00:31:48 2024 00:24:17.829 read: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(101MiB/2004msec) 00:24:17.829 slat (usec): min=2, max=294, avg= 2.16, stdev= 2.56 00:24:17.829 clat (usec): min=3862, max=9317, avg=5451.33, stdev=861.95 00:24:17.829 lat (usec): min=3864, max=9319, avg=5453.49, stdev=862.07 00:24:17.829 clat percentiles (usec): 00:24:17.829 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:24:17.829 | 30.00th=[ 5014], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5342], 00:24:17.829 | 70.00th=[ 5473], 80.00th=[ 5669], 90.00th=[ 6915], 95.00th=[ 7701], 00:24:17.829 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8979], 99.95th=[ 8979], 00:24:17.829 | 99.99th=[ 9110] 00:24:17.829 bw ( KiB/s): min=41920, max=55696, per=99.93%, avg=51804.00, stdev=6613.72, samples=4 00:24:17.829 iops : min=10480, max=13924, avg=12951.00, stdev=1653.43, samples=4 00:24:17.829 write: IOPS=12.9k, BW=50.6MiB/s (53.0MB/s)(101MiB/2004msec); 0 zone resets 00:24:17.829 slat (usec): min=2, max=273, avg= 2.25, stdev= 1.88 00:24:17.829 clat (usec): min=2999, max=7799, avg=4402.12, stdev=713.53 00:24:17.829 lat (usec): min=3006, max=7801, avg=4404.37, stdev=713.69 00:24:17.829 clat percentiles (usec): 00:24:17.829 | 1.00th=[ 3523], 5.00th=[ 3720], 10.00th=[ 3851], 20.00th=[ 3982], 00:24:17.829 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:24:17.829 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5735], 95.00th=[ 6259], 00:24:17.829 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 7308], 99.95th=[ 7439], 00:24:17.829 | 99.99th=[ 7701] 00:24:17.829 bw ( KiB/s): min=42448, max=55552, per=99.98%, avg=51764.00, stdev=6272.21, samples=4 00:24:17.829 iops : min=10612, max=13888, avg=12941.00, stdev=1568.05, samples=4 00:24:17.829 lat (msec) : 4=11.93%, 10=88.07% 00:24:17.829 cpu : usr=73.24%, sys=25.26%, ctx=40, majf=0, minf=9 00:24:17.829 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:17.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.829 issued rwts: total=25973,25938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.829 00:24:17.829 Run status group 0 (all jobs): 00:24:17.829 READ: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=101MiB (106MB), run=2004-2004msec 00:24:17.829 WRITE: bw=50.6MiB/s (53.0MB/s), 50.6MiB/s-50.6MiB/s (53.0MB/s-53.0MB/s), io=101MiB (106MB), run=2004-2004msec 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:17.829 
00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:17.829 00:31:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:18.089 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:18.089 fio-3.35 00:24:18.089 Starting 1 thread 00:24:20.631 00:24:20.631 test: (groupid=0, jobs=1): err= 0: pid=3355342: Wed Oct 9 00:31:50 2024 00:24:20.631 read: IOPS=9640, BW=151MiB/s (158MB/s)(302MiB/2002msec) 00:24:20.631 slat (usec): min=3, max=112, avg= 3.64, stdev= 1.65 00:24:20.631 clat (usec): min=1218, max=18004, avg=8045.81, stdev=2030.88 00:24:20.631 lat (usec): min=1222, max=18015, avg=8049.45, stdev=2031.10 00:24:20.631 clat percentiles (usec): 00:24:20.631 | 1.00th=[ 4080], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6194], 00:24:20.631 | 30.00th=[ 6783], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8586], 00:24:20.631 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11338], 00:24:20.631 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14353], 99.95th=[15008], 00:24:20.631 | 99.99th=[17957] 00:24:20.631 bw ( KiB/s): min=70336, max=85536, per=49.64%, avg=76568.00, stdev=7129.68, samples=4 00:24:20.631 iops : min= 4396, max= 5346, avg=4785.50, stdev=445.60, samples=4 00:24:20.631 write: IOPS=5684, BW=88.8MiB/s (93.1MB/s)(157MiB/1762msec); 0 zone resets 00:24:20.631 slat (usec): min=39, max=326, 
avg=41.55, stdev= 9.63 00:24:20.631 clat (usec): min=2326, max=19797, avg=9025.78, stdev=1637.57 00:24:20.631 lat (usec): min=2366, max=19837, avg=9067.33, stdev=1641.42 00:24:20.631 clat percentiles (usec): 00:24:20.631 | 1.00th=[ 5604], 5.00th=[ 6849], 10.00th=[ 7308], 20.00th=[ 7767], 00:24:20.631 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:24:20.631 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:24:20.631 | 99.00th=[14877], 99.50th=[16450], 99.90th=[18220], 99.95th=[19268], 00:24:20.631 | 99.99th=[19530] 00:24:20.631 bw ( KiB/s): min=73440, max=89120, per=87.53%, avg=79608.00, stdev=7183.29, samples=4 00:24:20.631 iops : min= 4590, max= 5570, avg=4975.50, stdev=448.96, samples=4 00:24:20.631 lat (msec) : 2=0.05%, 4=0.64%, 10=79.55%, 20=19.75% 00:24:20.631 cpu : usr=85.71%, sys=13.29%, ctx=14, majf=0, minf=31 00:24:20.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:20.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:20.631 issued rwts: total=19300,10016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:20.631 00:24:20.631 Run status group 0 (all jobs): 00:24:20.631 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=302MiB (316MB), run=2002-2002msec 00:24:20.631 WRITE: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=157MiB (164MB), run=1762-1762msec 00:24:20.631 00:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.631 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:20.631 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:20.631 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:20.631 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.632 rmmod nvme_tcp 00:24:20.632 rmmod nvme_fabrics 00:24:20.632 rmmod nvme_keyring 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 3353986 ']' 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 3353986 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3353986 ']' 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 
3353986 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.632 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3353986 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3353986' 00:24:20.893 killing process with pid 3353986 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3353986 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3353986 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.893 00:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.435 00:24:23.435 real 0m17.816s 00:24:23.435 user 1m0.383s 00:24:23.435 sys 0m7.785s 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.435 ************************************ 00:24:23.435 END TEST nvmf_fio_host 00:24:23.435 ************************************ 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.435 ************************************ 00:24:23.435 START TEST nvmf_failover 00:24:23.435 ************************************ 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:23.435 * Looking for test storage... 00:24:23.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:23.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.435 --rc genhtml_branch_coverage=1 00:24:23.435 --rc genhtml_function_coverage=1 00:24:23.435 --rc genhtml_legend=1 00:24:23.435 --rc geninfo_all_blocks=1 00:24:23.435 --rc geninfo_unexecuted_blocks=1 00:24:23.435 00:24:23.435 ' 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:23.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.435 --rc genhtml_branch_coverage=1 00:24:23.435 --rc genhtml_function_coverage=1 00:24:23.435 --rc genhtml_legend=1 00:24:23.435 --rc geninfo_all_blocks=1 00:24:23.435 --rc geninfo_unexecuted_blocks=1 00:24:23.435 00:24:23.435 ' 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:23.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.435 --rc genhtml_branch_coverage=1 00:24:23.435 --rc genhtml_function_coverage=1 00:24:23.435 --rc genhtml_legend=1 00:24:23.435 --rc geninfo_all_blocks=1 00:24:23.435 --rc geninfo_unexecuted_blocks=1 00:24:23.435 00:24:23.435 ' 00:24:23.435 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:23.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.435 --rc genhtml_branch_coverage=1 00:24:23.435 --rc genhtml_function_coverage=1 00:24:23.435 --rc genhtml_legend=1 00:24:23.435 --rc geninfo_all_blocks=1 00:24:23.435 --rc geninfo_unexecuted_blocks=1 00:24:23.435 00:24:23.435 ' 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
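The "[: : integer expression expected" complaint above is bash's numeric test receiving an empty string while common.sh assembles the nvmf_tgt argument list; because the test sits inside a conditional, it simply evaluates false and the run continues. A minimal sketch of that failure mode and the usual guard (the variable name here is illustrative, not the one common.sh actually checks):

#!/usr/bin/env bash
# An empty value handed to the numeric test reproduces the message seen in the log,
# but the conditional just takes the false branch, so the script keeps going.
opt_flag=""
if [ "$opt_flag" -eq 1 ]; then        # stderr: [: : integer expression expected
    echo "option enabled"
fi

# Guarded form: supply a numeric default so the comparison always sees an integer.
if [ "${opt_flag:-0}" -eq 1 ]; then
    echo "option enabled"
fi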
00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.436 00:31:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:31.779 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.780 00:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:31.780 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:31.780 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:31.780 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:31.780 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:24:31.780 00:24:31.780 --- 10.0.0.2 ping statistics --- 00:24:31.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.780 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:24:31.780 00:24:31.780 --- 10.0.0.1 ping statistics --- 00:24:31.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.780 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=3360003 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 3360003 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3360003 ']' 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:31.780 00:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:31.781 [2024-10-09 00:32:01.417757] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:24:31.781 [2024-10-09 00:32:01.417826] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.781 [2024-10-09 00:32:01.504942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:31.781 [2024-10-09 00:32:01.600299] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
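Condensed from the trace above: the fixture splits the two ports across network stacks by moving the target port (cvl_0_0, 10.0.0.2/24) into its own namespace while the initiator port (cvl_0_1, 10.0.0.1/24) stays in the root namespace, opens TCP/4420 on the initiator side with an iptables rule tagged SPDK_NVMF (so the cleanup pass can strip it later via iptables-save | grep -v SPDK_NVMF | iptables-restore), ping-checks both directions, loads nvme-tcp, and finally launches nvmf_tgt inside the namespace. A sketch with the same names and addresses used in this run; the nvmf_tgt path is written relative here rather than as the full workspace path:

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target port lives in its own namespace; initiator port stays in the root namespace.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default port and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Initiator-side NVMe/TCP support, then the SPDK target on cores 1-3 (-m 0xE) inside the namespace.
modprobe nvme-tcp
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &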
00:24:31.781 [2024-10-09 00:32:01.600352] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.781 [2024-10-09 00:32:01.600361] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.781 [2024-10-09 00:32:01.600369] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.781 [2024-10-09 00:32:01.600375] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.781 [2024-10-09 00:32:01.601574] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.781 [2024-10-09 00:32:01.601751] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.781 [2024-10-09 00:32:01.601755] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.781 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.781 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:31.781 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:31.781 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:31.781 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:31.781 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.781 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:32.041 [2024-10-09 00:32:02.453991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.041 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:32.302 Malloc0 00:24:32.302 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.302 00:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.561 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.821 [2024-10-09 00:32:03.282083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.821 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.082 [2024-10-09 00:32:03.482632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:33.082 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:33.082 [2024-10-09 00:32:03.687372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3360530 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3360530 /var/tmp/bdevperf.sock 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3360530 ']' 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.343 00:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.299 00:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.299 00:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:34.300 00:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:34.300 NVMe0n1 00:24:34.300 00:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:34.565 00:24:34.565 00:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3360816 00:24:34.565 00:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:34.565 00:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:35.951 00:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.951 00:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:39.246 00:32:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:39.246 00:24:39.246 00:32:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.507 [2024-10-09 00:32:09.961036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961278] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the 
state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.507 [2024-10-09 00:32:09.961389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.508 [2024-10-09 00:32:09.961394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.508 [2024-10-09 00:32:09.961399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cdc0 is same with the state(6) to be set 00:24:39.508 00:32:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:42.809 00:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.809 [2024-10-09 00:32:13.150784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.809 00:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:43.750 00:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:43.750 [2024-10-09 00:32:14.338453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dd10 is same with the state(6) to be set 00:24:43.750 [2024-10-09 00:32:14.338482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dd10 is same with the state(6) to be set 00:24:43.750 [2024-10-09 00:32:14.338488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dd10 is same with the state(6) to be set 00:24:43.750 [2024-10-09 00:32:14.338493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dd10 is same with the state(6) to be set 00:24:43.750 [2024-10-09 00:32:14.338498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dd10 is same with the state(6) to be set 00:24:43.750 [2024-10-09 00:32:14.338508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dd10 is same with the state(6) to be set 00:24:43.750 [2024-10-09 00:32:14.338513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dd10 is same with the state(6) to be set 00:24:43.750 [2024-10-09 00:32:14.338517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dd10 is same with the state(6) to be set 00:24:43.750 00:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3360816 00:24:50.334 { 00:24:50.334 "results": [ 00:24:50.334 { 00:24:50.334 "job": "NVMe0n1", 00:24:50.334 "core_mask": "0x1", 00:24:50.334 "workload": "verify", 00:24:50.334 "status": "finished", 00:24:50.334 "verify_range": { 00:24:50.334 "start": 0, 00:24:50.334 "length": 16384 00:24:50.334 }, 00:24:50.334 "queue_depth": 128, 00:24:50.334 "io_size": 4096, 00:24:50.334 "runtime": 15.006422, 00:24:50.334 "iops": 12523.638213026396, 00:24:50.334 "mibps": 48.92046176963436, 00:24:50.334 "io_failed": 9421, 00:24:50.334 "io_timeout": 0, 00:24:50.334 "avg_latency_us": 9712.2054207287, 00:24:50.334 "min_latency_us": 532.48, 00:24:50.334 "max_latency_us": 21080.746666666666 00:24:50.334 } 
00:24:50.334 ], 00:24:50.334 "core_count": 1 00:24:50.334 } 00:24:50.334 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3360530 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3360530 ']' 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3360530 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3360530 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3360530' 00:24:50.335 killing process with pid 3360530 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3360530 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3360530 00:24:50.335 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:50.335 [2024-10-09 00:32:03.769841] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:24:50.335 [2024-10-09 00:32:03.769922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360530 ] 00:24:50.335 [2024-10-09 00:32:03.854221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.335 [2024-10-09 00:32:03.949206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.335 Running I/O for 15 seconds... 
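The summary block is internally consistent: bdevperf ran 4 KiB I/O (-o 4096), so throughput in MiB/s is just IOPS x 4096 / 2^20, and the 9421 entries under io_failed are presumably the verify I/Os caught in flight each time the failover script toggled a path (remove 4420, attach 4422, remove 4421, re-add 4420, remove 4422). A quick arithmetic check against the numbers reported above:

#!/usr/bin/env bash
# Cross-check the bdevperf summary: MiB/s should equal IOPS * io_size / 1 MiB.
iops=12523.638213026396
io_size=4096     # bytes, from the bdevperf -o 4096 option
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.8f MiB/s\n", iops * sz / (1024 * 1024) }'
# Prints 48.92046177 MiB/s, matching the "mibps" field in the JSON above.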
00:24:50.335 11352.00 IOPS, 44.34 MiB/s [2024-10-08T22:32:20.970Z] [2024-10-09 00:32:06.332059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:50.335 [2024-10-09 00:32:06.332278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332460] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.335 [2024-10-09 00:32:06.332522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.335 [2024-10-09 00:32:06.332531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332632] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97936 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.336 [2024-10-09 00:32:06.332896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.332913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.332930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.332947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.332966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:50.336 [2024-10-09 00:32:06.332982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.332992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333151] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.336 [2024-10-09 00:32:06.333178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.336 [2024-10-09 00:32:06.333186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-10-09 00:32:06.333305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-10-09 00:32:06.333424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-10-09 00:32:06.333440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-10-09 00:32:06.333457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-10-09 00:32:06.333473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 
[2024-10-09 00:32:06.333844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.337 [2024-10-09 00:32:06.333868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.337 [2024-10-09 00:32:06.333885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.337 [2024-10-09 00:32:06.333894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.333901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.333911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.333918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.333927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.333935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.333944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.333951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.333961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.333968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.333977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.333985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.333995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:86 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:06.334274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bab790 is same with the state(6) to be set 00:24:50.338 [2024-10-09 00:32:06.334292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.338 [2024-10-09 00:32:06.334298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.338 [2024-10-09 00:32:06.334305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97608 len:8 PRP1 0x0 PRP2 0x0 00:24:50.338 [2024-10-09 00:32:06.334313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334351] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bab790 was disconnected and freed. reset controller. 
00:24:50.338 [2024-10-09 00:32:06.334361] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:50.338 [2024-10-09 00:32:06.334383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.338 [2024-10-09 00:32:06.334391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.338 [2024-10-09 00:32:06.334407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.338 [2024-10-09 00:32:06.334423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.338 [2024-10-09 00:32:06.334439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:06.334446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.338 [2024-10-09 00:32:06.338037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.338 [2024-10-09 00:32:06.338062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ae30 (9): Bad file descriptor 00:24:50.338 [2024-10-09 00:32:06.459746] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:50.338 10836.00 IOPS, 42.33 MiB/s [2024-10-08T22:32:20.973Z] 10996.33 IOPS, 42.95 MiB/s [2024-10-08T22:32:20.973Z] 11459.50 IOPS, 44.76 MiB/s [2024-10-08T22:32:20.973Z] [2024-10-09 00:32:09.961751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.338 [2024-10-09 00:32:09.961782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.338 [2024-10-09 00:32:09.961800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961899] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.961992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.961999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.339 [2024-10-09 00:32:09.962177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.339 [2024-10-09 00:32:09.962183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 
[2024-10-09 00:32:09.962371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.340 [2024-10-09 00:32:09.962595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962602] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-10-09 00:32:09.962607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-10-09 00:32:09.962618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.340 [2024-10-09 00:32:09.962629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.340 [2024-10-09 00:32:09.962636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79448 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 
[2024-10-09 00:32:09.962839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.962990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.962996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.963001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.963008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.963013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.963019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.963024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.963030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.963035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.963042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.963046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.963053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.963058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.963064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.963072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.963079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.963084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.963091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.341 [2024-10-09 00:32:09.963096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.341 [2024-10-09 00:32:09.963102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:09.963107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:09.963119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:09.963130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:09.963141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:09.963153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79752 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.342 [2024-10-09 00:32:09.963196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79760 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.342 [2024-10-09 00:32:09.963214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79768 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.342 [2024-10-09 00:32:09.963232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79776 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.342 [2024-10-09 00:32:09.963251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79784 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.342 [2024-10-09 00:32:09.963269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.342 [2024-10-09 00:32:09.963290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.342 [2024-10-09 00:32:09.963308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963323] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.342 [2024-10-09 00:32:09.963327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.342 [2024-10-09 00:32:09.963345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.342 [2024-10-09 00:32:09.963350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:24:50.342 [2024-10-09 00:32:09.963355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.963385] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bad600 was disconnected and freed. reset controller. 00:24:50.342 [2024-10-09 00:32:09.963391] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:50.342 [2024-10-09 00:32:09.963407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.342 [2024-10-09 00:32:09.963412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.975766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.342 [2024-10-09 00:32:09.975795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.975805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.342 [2024-10-09 00:32:09.975812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.975820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.342 [2024-10-09 00:32:09.975827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:09.975836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.342 [2024-10-09 00:32:09.975879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ae30 (9): Bad file descriptor 00:24:50.342 [2024-10-09 00:32:09.979200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.342 [2024-10-09 00:32:10.009287] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
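The block above is the part of this failure-injection pass worth reading closely: once the target side drops the connection, bdev_nvme aborts every queued I/O on qpair 0x1bad600 (each queued READ/WRITE completes with "ABORTED - SQ DELETION"), marks nqn.2016-06.io.spdk:cnode1 as failed, fails the TRID over from 10.0.0.2:4421 to 10.0.0.2:4422, and reconnects, with "Resetting controller successful" confirming that I/O resumes before the per-second IOPS samples that follow. Each command here is len:8 blocks, so assuming the namespace's 512-byte logical blocks that is 4 KiB per I/O, and the reported throughput is simply IOPS times 4 KiB (for example 11609.20 IOPS x 4 KiB is roughly 45.35 MiB/s, matching the sample). As a minimal sketch for sanity-checking a console dump like this offline, assuming it has been saved to a hypothetical file named build.log and using only standard grep/sort/uniq:

# tally how many READ vs WRITE submissions were completed as aborted
# (every aborted command is first echoed by nvme_io_qpair_print_command)
grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' build.log | sort | uniq -c

# count how many failovers and successful controller resets the run performed
grep -c 'Start failover from' build.log
grep -c 'Resetting controller successful' build.log

If the two reset-related counts diverge, or the abort tallies keep growing without a matching "Resetting controller successful", that is usually the first hint the failover path stalled rather than recovered.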
00:24:50.342 11609.20 IOPS, 45.35 MiB/s [2024-10-08T22:32:20.977Z] 11843.83 IOPS, 46.26 MiB/s [2024-10-08T22:32:20.977Z] 12005.57 IOPS, 46.90 MiB/s [2024-10-08T22:32:20.977Z] 12147.25 IOPS, 47.45 MiB/s [2024-10-08T22:32:20.977Z] 12251.89 IOPS, 47.86 MiB/s [2024-10-08T22:32:20.977Z] [2024-10-09 00:32:14.338322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.342 [2024-10-09 00:32:14.338360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.342 [2024-10-09 00:32:14.338373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.342 [2024-10-09 00:32:14.338384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.342 [2024-10-09 00:32:14.338395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ae30 is same with the state(6) to be set 00:24:50.342 [2024-10-09 00:32:14.338728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:14.338740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:14.338757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:14.338770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:14.338781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:14.338793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.342 [2024-10-09 00:32:14.338805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.342 [2024-10-09 00:32:14.338812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 
[2024-10-09 00:32:14.338923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.338993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.338998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:40 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17600 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.343 [2024-10-09 00:32:14.339281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.343 [2024-10-09 00:32:14.339286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 
[2024-10-09 00:32:14.339389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339507] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.344 [2024-10-09 00:32:14.339565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.344 [2024-10-09 00:32:14.339634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.344 [2024-10-09 00:32:14.339640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 
[2024-10-09 00:32:14.339984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.339989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.339995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.340000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.340008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.340013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.340019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.340024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.340030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-10-09 00:32:14.340035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.340041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.345 [2024-10-09 00:32:14.340046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.340053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.340058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.340064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.340069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.340075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.340080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.345 [2024-10-09 00:32:14.340087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.345 [2024-10-09 00:32:14.340092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.346 [2024-10-09 00:32:14.340220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.346 [2024-10-09 00:32:14.340242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.346 [2024-10-09 00:32:14.340247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17640 len:8 PRP1 0x0 PRP2 0x0 00:24:50.346 [2024-10-09 00:32:14.340253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.346 [2024-10-09 00:32:14.340286] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bba600 was disconnected and freed. reset controller. 00:24:50.346 [2024-10-09 00:32:14.340293] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:50.346 [2024-10-09 00:32:14.340299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.346 [2024-10-09 00:32:14.342753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.346 [2024-10-09 00:32:14.342772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8ae30 (9): Bad file descriptor 00:24:50.346 [2024-10-09 00:32:14.409919] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:50.346 12249.80 IOPS, 47.85 MiB/s [2024-10-08T22:32:20.981Z] 12318.27 IOPS, 48.12 MiB/s [2024-10-08T22:32:20.981Z] 12383.17 IOPS, 48.37 MiB/s [2024-10-08T22:32:20.981Z] 12452.46 IOPS, 48.64 MiB/s [2024-10-08T22:32:20.981Z] 12491.29 IOPS, 48.79 MiB/s 00:24:50.346 Latency(us) 00:24:50.346 [2024-10-08T22:32:20.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.346 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:50.346 Verification LBA range: start 0x0 length 0x4000 00:24:50.346 NVMe0n1 : 15.01 12523.64 48.92 627.80 0.00 9712.21 532.48 21080.75 00:24:50.346 [2024-10-08T22:32:20.981Z] =================================================================================================================== 00:24:50.346 [2024-10-08T22:32:20.981Z] Total : 12523.64 48.92 627.80 0.00 9712.21 532.48 21080.75 00:24:50.346 Received shutdown signal, test time was about 15.000000 seconds 00:24:50.346 00:24:50.346 Latency(us) 00:24:50.346 [2024-10-08T22:32:20.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.346 [2024-10-08T22:32:20.981Z] =================================================================================================================== 00:24:50.346 [2024-10-08T22:32:20.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3363711 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 
-- # waitforlisten 3363711 /var/tmp/bdevperf.sock 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3363711 ']' 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.346 00:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:50.916 00:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.916 00:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:50.916 00:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:50.916 [2024-10-09 00:32:21.526652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:51.176 00:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:51.177 [2024-10-09 00:32:21.703061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:51.177 00:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:51.436 NVMe0n1 00:24:51.436 00:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:51.697 00:24:51.957 00:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:52.217 00:24:52.217 00:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:52.217 00:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:52.217 00:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:24:52.478 00:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:55.803 00:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.804 00:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:55.804 00:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3364911 00:24:55.804 00:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:55.804 00:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3364911 00:24:56.745 { 00:24:56.745 "results": [ 00:24:56.745 { 00:24:56.745 "job": "NVMe0n1", 00:24:56.745 "core_mask": "0x1", 00:24:56.745 "workload": "verify", 00:24:56.745 "status": "finished", 00:24:56.745 "verify_range": { 00:24:56.745 "start": 0, 00:24:56.745 "length": 16384 00:24:56.745 }, 00:24:56.745 "queue_depth": 128, 00:24:56.745 "io_size": 4096, 00:24:56.745 "runtime": 1.009213, 00:24:56.745 "iops": 12971.493629194234, 00:24:56.745 "mibps": 50.66989698903998, 00:24:56.745 "io_failed": 0, 00:24:56.745 "io_timeout": 0, 00:24:56.745 "avg_latency_us": 9828.113580322359, 00:24:56.745 "min_latency_us": 1454.08, 00:24:56.745 "max_latency_us": 13762.56 00:24:56.745 } 00:24:56.745 ], 00:24:56.745 "core_count": 1 00:24:56.745 } 00:24:56.745 00:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:56.745 [2024-10-09 00:32:20.570503] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:24:56.745 [2024-10-09 00:32:20.570561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3363711 ] 00:24:56.745 [2024-10-09 00:32:20.646963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.745 [2024-10-09 00:32:20.699630] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.745 [2024-10-09 00:32:22.972089] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:56.745 [2024-10-09 00:32:22.972126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.745 [2024-10-09 00:32:22.972135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.745 [2024-10-09 00:32:22.972142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.745 [2024-10-09 00:32:22.972148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.745 [2024-10-09 00:32:22.972153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.745 [2024-10-09 00:32:22.972158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.745 [2024-10-09 00:32:22.972164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.745 [2024-10-09 00:32:22.972169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.745 [2024-10-09 00:32:22.972174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.745 [2024-10-09 00:32:22.972197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.745 [2024-10-09 00:32:22.972208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cae30 (9): Bad file descriptor 00:24:56.745 [2024-10-09 00:32:23.105884] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:56.745 Running I/O for 1 seconds... 
00:24:56.745 12930.00 IOPS, 50.51 MiB/s 00:24:56.745 Latency(us) 00:24:56.745 [2024-10-08T22:32:27.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.745 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:56.745 Verification LBA range: start 0x0 length 0x4000 00:24:56.745 NVMe0n1 : 1.01 12971.49 50.67 0.00 0.00 9828.11 1454.08 13762.56 00:24:56.745 [2024-10-08T22:32:27.380Z] =================================================================================================================== 00:24:56.745 [2024-10-08T22:32:27.380Z] Total : 12971.49 50.67 0.00 0.00 9828.11 1454.08 13762.56 00:24:56.745 00:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:56.745 00:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:57.007 00:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:57.268 00:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:57.268 00:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:57.268 00:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:57.541 00:32:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3363711 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3363711 ']' 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3363711 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3363711 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3363711' 00:25:00.861 killing process with pid 3363711 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3363711 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3363711 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:25:00.861 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.121 rmmod nvme_tcp 00:25:01.121 rmmod nvme_fabrics 00:25:01.121 rmmod nvme_keyring 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 3360003 ']' 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 3360003 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3360003 ']' 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3360003 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:01.121 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3360003 00:25:01.380 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:01.380 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:01.380 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3360003' 00:25:01.380 killing process with pid 3360003 00:25:01.380 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3360003 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3360003 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:25:01.381 00:32:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.381 00:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.939 00:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.939 00:25:03.939 real 0m40.406s 00:25:03.939 user 2m3.772s 00:25:03.939 sys 0m8.952s 00:25:03.939 00:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.939 00:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:03.939 ************************************ 00:25:03.939 END TEST nvmf_failover 00:25:03.939 ************************************ 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.939 ************************************ 00:25:03.939 START TEST nvmf_host_discovery 00:25:03.939 ************************************ 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:03.939 * Looking for test storage... 
00:25:03.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.939 --rc genhtml_branch_coverage=1 00:25:03.939 --rc genhtml_function_coverage=1 00:25:03.939 --rc genhtml_legend=1 00:25:03.939 --rc geninfo_all_blocks=1 00:25:03.939 --rc geninfo_unexecuted_blocks=1 00:25:03.939 00:25:03.939 ' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.939 --rc genhtml_branch_coverage=1 00:25:03.939 --rc genhtml_function_coverage=1 00:25:03.939 --rc genhtml_legend=1 00:25:03.939 --rc geninfo_all_blocks=1 00:25:03.939 --rc geninfo_unexecuted_blocks=1 00:25:03.939 00:25:03.939 ' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.939 --rc genhtml_branch_coverage=1 00:25:03.939 --rc genhtml_function_coverage=1 00:25:03.939 --rc genhtml_legend=1 00:25:03.939 --rc geninfo_all_blocks=1 00:25:03.939 --rc geninfo_unexecuted_blocks=1 00:25:03.939 00:25:03.939 ' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.939 --rc genhtml_branch_coverage=1 00:25:03.939 --rc genhtml_function_coverage=1 00:25:03.939 --rc genhtml_legend=1 00:25:03.939 --rc geninfo_all_blocks=1 00:25:03.939 --rc geninfo_unexecuted_blocks=1 00:25:03.939 00:25:03.939 ' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:03.939 00:32:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.939 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.940 00:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:12.092 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:12.092 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.092 00:32:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:12.092 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:12.092 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:12.093 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:12.093 
00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:12.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:25:12.093 00:25:12.093 --- 10.0.0.2 ping statistics --- 00:25:12.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.093 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:25:12.093 00:25:12.093 --- 10.0.0.1 ping statistics --- 00:25:12.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.093 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=3370078 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 3370078 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3370078 ']' 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.093 00:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.093 [2024-10-09 00:32:41.832355] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
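
Condensed from the xtrace above: nvmftestinit isolates one port of the E810 pair (cvl_0_0) in a network namespace to act as the NVMe-oF target, leaves its sibling (cvl_0_1) in the default namespace as the initiator, and then starts nvmf_tgt inside that namespace. A minimal standalone sketch of the same sequence, using the interface names, addresses and binary path printed in the log (run as root; the iptables comment tag the harness adds is omitted):

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one E810 port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

ping -c 1 10.0.0.2                                   # default namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> default namespace

ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
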
00:25:12.093 [2024-10-09 00:32:41.832420] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.093 [2024-10-09 00:32:41.925564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.093 [2024-10-09 00:32:42.018998] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.093 [2024-10-09 00:32:42.019057] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.093 [2024-10-09 00:32:42.019066] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.093 [2024-10-09 00:32:42.019073] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.093 [2024-10-09 00:32:42.019079] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:12.093 [2024-10-09 00:32:42.019876] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.093 [2024-10-09 00:32:42.713609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.093 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.354 [2024-10-09 00:32:42.725937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.354 null0 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.354 null1 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3370278 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3370278 /tmp/host.sock 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3370278 ']' 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:12.354 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.354 00:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.354 [2024-10-09 00:32:42.823535] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
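
In the harness, rpc_cmd is essentially a wrapper around SPDK's scripts/rpc.py: without -s it targets the target app's default socket /var/tmp/spdk.sock, while -s /tmp/host.sock addresses the second app started above. Outside the harness the same bring-up looks roughly like the sketch below (paths relative to the SPDK checkout; transport options exactly as the test passes them):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target app: TCP transport, a discovery-service listener, and two null bdevs
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
./scripts/rpc.py bdev_null_create null0 1000 512     # 1000 blocks x 512 B
./scripts/rpc.py bdev_null_create null1 1000 512
./scripts/rpc.py bdev_wait_for_examine

# Second SPDK app acting as the NVMe-oF host, with its own RPC socket
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
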
00:25:12.354 [2024-10-09 00:32:42.823604] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370278 ] 00:25:12.354 [2024-10-09 00:32:42.907356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.615 [2024-10-09 00:32:43.003392] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:13.201 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.202 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.473 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.474 [2024-10-09 00:32:43.989140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.474 00:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:13.474 00:32:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.474 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.735 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.736 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:13.736 00:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:14.306 [2024-10-09 00:32:44.713812] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:14.306 [2024-10-09 00:32:44.713836] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:14.306 [2024-10-09 00:32:44.713853] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:14.306 
[2024-10-09 00:32:44.841230] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:14.567 [2024-10-09 00:32:44.943770] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:14.567 [2024-10-09 00:32:44.943794] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
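
This is the core of the test case: the host app is told to follow the discovery subsystem at 10.0.0.2:8009, the target then publishes nqn.2016-06.io.spdk:cnode0 with null0 behind it, and the host's discovery poller is expected to attach the new subsystem on its own, which is what the "attach nvme0 done" messages and the nvme0/nvme0n1 checks above confirm. Restated as plain rpc.py calls with the same names, addresses and ports as in the log:

# Host side: follow the discovery service, attach whatever it advertises,
# and name the resulting controllers with the "nvme" prefix
./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# Target side: publish a subsystem backed by null0 and allow the host's NQN
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# What the get_subsystem_names / get_bdev_list helpers then poll for on the host
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # expect: nvme0n1
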
00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:14.828 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:14.829 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:14.829 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:14.829 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.829 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:14.829 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.829 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:14.829 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.829 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:15.090 00:32:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.090 [2024-10-09 00:32:45.536984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:15.090 [2024-10-09 00:32:45.537342] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:15.090 [2024-10-09 00:32:45.537369] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:15.090 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:15.091 00:32:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.091 [2024-10-09 00:32:45.665769] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:15.091 00:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:15.351 [2024-10-09 00:32:45.726548] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:15.351 [2024-10-09 00:32:45.726566] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:15.351 [2024-10-09 00:32:45.726572] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:16.293 00:32:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:16.293 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.294 [2024-10-09 00:32:46.804527] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:16.294 [2024-10-09 00:32:46.804545] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:16.294 [2024-10-09 00:32:46.805703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.294 [2024-10-09 00:32:46.805718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.294 [2024-10-09 00:32:46.805729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.294 [2024-10-09 00:32:46.805734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.294 [2024-10-09 00:32:46.805740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.294 [2024-10-09 00:32:46.805746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.294 [2024-10-09 00:32:46.805752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.294 [2024-10-09 00:32:46.805758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.294 [2024-10-09 00:32:46.805764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dded0 is same with the state(6) to be set 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:16.294 [2024-10-09 00:32:46.815719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dded0 (9): Bad file descriptor 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:16.294 [2024-10-09 00:32:46.825754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:16.294 [2024-10-09 00:32:46.826119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.294 [2024-10-09 00:32:46.826130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dded0 with addr=10.0.0.2, port=4420 00:25:16.294 [2024-10-09 00:32:46.826136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dded0 is same with the state(6) to be set 00:25:16.294 [2024-10-09 00:32:46.826144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dded0 (9): Bad file descriptor 00:25:16.294 [2024-10-09 00:32:46.826152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:16.294 [2024-10-09 00:32:46.826158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:16.294 [2024-10-09 00:32:46.826164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
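For readers following the xtrace output: the get_subsystem_names, get_bdev_list and get_subsystem_paths helpers that keep expanding in this trace (host/discovery.sh@55, @59 and @63) boil down to rpc.py queries filtered with jq. A minimal reconstruction from the fragments visible here, with rpc_cmd assumed to be a thin stand-in for the autotest wrapper around scripts/rpc.py, is:

    rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }  # assumed stand-in

    get_subsystem_names() {  # controller names known to the host app, e.g. "nvme0"
            rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {        # namespaces exposed through those controllers, e.g. "nvme0n1 nvme0n2"
            rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {  # listener ports (trsvcid) of every path to controller $1, e.g. "4420 4421"
            rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
                    jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }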
00:25:16.294 [2024-10-09 00:32:46.826172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.294 [2024-10-09 00:32:46.835803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:16.294 [2024-10-09 00:32:46.836118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.294 [2024-10-09 00:32:46.836127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dded0 with addr=10.0.0.2, port=4420 00:25:16.294 [2024-10-09 00:32:46.836132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dded0 is same with the state(6) to be set 00:25:16.294 [2024-10-09 00:32:46.836140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dded0 (9): Bad file descriptor 00:25:16.294 [2024-10-09 00:32:46.836147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:16.294 [2024-10-09 00:32:46.836152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:16.294 [2024-10-09 00:32:46.836157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:16.294 [2024-10-09 00:32:46.836164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.294 [2024-10-09 00:32:46.845847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:16.294 [2024-10-09 00:32:46.846226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.294 [2024-10-09 00:32:46.846235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dded0 with addr=10.0.0.2, port=4420 00:25:16.294 [2024-10-09 00:32:46.846240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dded0 is same with the state(6) to be set 00:25:16.294 [2024-10-09 00:32:46.846247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dded0 (9): Bad file descriptor 00:25:16.294 [2024-10-09 00:32:46.846255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:16.294 [2024-10-09 00:32:46.846259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:16.294 [2024-10-09 00:32:46.846264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:16.294 [2024-10-09 00:32:46.846272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
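The waitforcondition wrapper whose expansion dominates these checks (common/autotest_common.sh@914-@920) is a bounded retry loop. A sketch reconstructed from this trace; the exhausted-retries return value is assumed, since that path is never hit here:

    waitforcondition() {
            local cond=$1   # a shell expression, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
            local max=10
            while (( max-- )); do
                    eval "$cond" && return 0   # condition met: stop polling
                    sleep 1                    # otherwise wait and re-evaluate, up to 10 times
            done
            return 1                           # assumed give-up path (not exercised in this trace)
    }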
00:25:16.294 [2024-10-09 00:32:46.855892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:16.294 [2024-10-09 00:32:46.856219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.294 [2024-10-09 00:32:46.856231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dded0 with addr=10.0.0.2, port=4420 00:25:16.294 [2024-10-09 00:32:46.856237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dded0 is same with the state(6) to be set 00:25:16.294 [2024-10-09 00:32:46.856245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dded0 (9): Bad file descriptor 00:25:16.294 [2024-10-09 00:32:46.856253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:16.294 [2024-10-09 00:32:46.856258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:16.294 [2024-10-09 00:32:46.856264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:16.294 [2024-10-09 00:32:46.856271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:16.294 [2024-10-09 00:32:46.865938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:16.294 [2024-10-09 00:32:46.866249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.294 [2024-10-09 00:32:46.866259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dded0 with addr=10.0.0.2, port=4420 00:25:16.294 [2024-10-09 00:32:46.866265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dded0 is same with the state(6) to be set 00:25:16.294 [2024-10-09 00:32:46.866272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dded0 (9): Bad file descriptor 00:25:16.294 [2024-10-09 00:32:46.866280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:16.294 [2024-10-09 00:32:46.866284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:16.294 [2024-10-09 00:32:46.866289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
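The notification bookkeeping that appears before and after this point (host/discovery.sh@74/@75 and the is_notification_count_eq checks below) counts bdev add/remove notifications received since the previous poll. Reconstructed from the rpc_cmd/jq expansions in this trace, with notify_id assumed to start at 0 and the -i semantics inferred from the "-i 2" call and the 2 -> 4 step seen above:

    notify_id=0

    get_notification_count() {
            # fetch notifications after the ones already counted, as suggested by "-i 2" in this trace
            notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
            notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
            local expected_count=$1
            waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }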
00:25:16.294 [2024-10-09 00:32:46.866297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.294 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:16.294 [2024-10-09 00:32:46.875983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:16.294 [2024-10-09 00:32:46.876305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.294 [2024-10-09 00:32:46.876314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dded0 with addr=10.0.0.2, port=4420 00:25:16.294 [2024-10-09 00:32:46.876323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dded0 is same with the state(6) to be set 00:25:16.294 [2024-10-09 00:32:46.876330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dded0 (9): Bad file descriptor 00:25:16.294 [2024-10-09 00:32:46.876337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:16.294 [2024-10-09 00:32:46.876342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:16.294 [2024-10-09 00:32:46.876347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:16.294 [2024-10-09 00:32:46.876354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.294 [2024-10-09 00:32:46.886029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:16.294 [2024-10-09 00:32:46.886333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.294 [2024-10-09 00:32:46.886341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dded0 with addr=10.0.0.2, port=4420 00:25:16.294 [2024-10-09 00:32:46.886346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dded0 is same with the state(6) to be set 00:25:16.295 [2024-10-09 00:32:46.886354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dded0 (9): Bad file descriptor 00:25:16.295 [2024-10-09 00:32:46.886361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:16.295 [2024-10-09 00:32:46.886365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:16.295 [2024-10-09 00:32:46.886370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:16.295 [2024-10-09 00:32:46.886377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
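The rest of this test case, in the bdev_nvme_start_discovery requests and JSON-RPC error responses further down, exercises the discovery error paths. Condensed into plain rpc.py calls, with the error codes taken from the responses recorded in this trace, the sequence is roughly:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # start discovery against the target's discovery service and wait for the attach (-w);
    # "nvme" becomes the prefix for the controllers/bdevs created from the log page
    $rpc_py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
            -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # re-starting discovery that collides with the one already running is rejected;
    # this trace records JSON-RPC error -17 "File exists" for that case
    $rpc_py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
            -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "duplicate discovery rejected"

    # with nothing listening on 8010, a bounded attach (-T, milliseconds) gives up after
    # 3 seconds; this trace records -110 "Connection timed out"
    $rpc_py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
            -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "attach timed out"

    # tear the discovery service down again
    $rpc_py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme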
00:25:16.295 [2024-10-09 00:32:46.893797] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:16.295 [2024-10-09 00:32:46.893810] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.295 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.557 00:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.557 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.558 00:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.941 [2024-10-09 00:32:48.211641] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:17.941 [2024-10-09 00:32:48.211655] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:17.941 [2024-10-09 00:32:48.211664] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:17.941 [2024-10-09 00:32:48.299928] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:17.941 [2024-10-09 00:32:48.406709] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:17.941 [2024-10-09 00:32:48.406738] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.941 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.941 request: 00:25:17.941 { 00:25:17.941 "name": "nvme", 00:25:17.942 "trtype": "tcp", 00:25:17.942 "traddr": "10.0.0.2", 00:25:17.942 "adrfam": "ipv4", 00:25:17.942 "trsvcid": "8009", 00:25:17.942 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:17.942 "wait_for_attach": true, 00:25:17.942 "method": "bdev_nvme_start_discovery", 00:25:17.942 "req_id": 1 00:25:17.942 } 00:25:17.942 Got JSON-RPC error response 00:25:17.942 response: 00:25:17.942 { 00:25:17.942 "code": -17, 00:25:17.942 "message": "File exists" 00:25:17.942 } 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.942 request: 00:25:17.942 { 00:25:17.942 "name": "nvme_second", 00:25:17.942 "trtype": "tcp", 00:25:17.942 "traddr": "10.0.0.2", 00:25:17.942 "adrfam": "ipv4", 00:25:17.942 "trsvcid": "8009", 00:25:17.942 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:17.942 "wait_for_attach": true, 00:25:17.942 "method": "bdev_nvme_start_discovery", 00:25:17.942 "req_id": 1 00:25:17.942 } 00:25:17.942 Got JSON-RPC error response 00:25:17.942 response: 00:25:17.942 { 00:25:17.942 "code": -17, 00:25:17.942 "message": "File exists" 00:25:17.942 } 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:17.942 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:18.203 00:32:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.203 00:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.160 [2024-10-09 00:32:49.654625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.160 [2024-10-09 00:32:49.654649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0cfc0 with addr=10.0.0.2, port=8010 00:25:19.160 [2024-10-09 00:32:49.654659] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:19.160 [2024-10-09 00:32:49.654664] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:19.160 [2024-10-09 00:32:49.654670] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:20.109 [2024-10-09 00:32:50.656878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.109 [2024-10-09 00:32:50.656909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0cfc0 with addr=10.0.0.2, port=8010 00:25:20.109 [2024-10-09 00:32:50.656921] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:20.109 [2024-10-09 00:32:50.656926] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:25:20.109 [2024-10-09 00:32:50.656932] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:21.051 [2024-10-09 00:32:51.658974] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:21.051 request: 00:25:21.051 { 00:25:21.051 "name": "nvme_second", 00:25:21.051 "trtype": "tcp", 00:25:21.051 "traddr": "10.0.0.2", 00:25:21.051 "adrfam": "ipv4", 00:25:21.051 "trsvcid": "8010", 00:25:21.051 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:21.051 "wait_for_attach": false, 00:25:21.051 "attach_timeout_ms": 3000, 00:25:21.051 "method": "bdev_nvme_start_discovery", 00:25:21.051 "req_id": 1 00:25:21.051 } 00:25:21.051 Got JSON-RPC error response 00:25:21.051 response: 00:25:21.051 { 00:25:21.051 "code": -110, 00:25:21.051 "message": "Connection timed out" 00:25:21.051 } 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:21.051 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3370278 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:21.312 rmmod nvme_tcp 00:25:21.312 rmmod nvme_fabrics 00:25:21.312 rmmod nvme_keyring 00:25:21.312 00:32:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 3370078 ']' 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 3370078 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3370078 ']' 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3370078 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3370078 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3370078' 00:25:21.312 killing process with pid 3370078 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3370078 00:25:21.312 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3370078 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.584 00:32:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.519 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:23.519 00:25:23.519 real 0m19.979s 00:25:23.519 user 0m22.921s 00:25:23.519 sys 0m7.218s 00:25:23.519 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:23.519 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.519 
************************************ 00:25:23.519 END TEST nvmf_host_discovery 00:25:23.519 ************************************ 00:25:23.519 00:32:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:23.519 00:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:23.519 00:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:23.519 00:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.519 ************************************ 00:25:23.519 START TEST nvmf_host_multipath_status 00:25:23.519 ************************************ 00:25:23.519 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:23.787 * Looking for test storage... 00:25:23.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:23.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.787 --rc genhtml_branch_coverage=1 00:25:23.787 --rc genhtml_function_coverage=1 00:25:23.787 --rc genhtml_legend=1 00:25:23.787 --rc geninfo_all_blocks=1 00:25:23.787 --rc geninfo_unexecuted_blocks=1 00:25:23.787 00:25:23.787 ' 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:23.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.787 --rc genhtml_branch_coverage=1 00:25:23.787 --rc genhtml_function_coverage=1 00:25:23.787 --rc genhtml_legend=1 00:25:23.787 --rc geninfo_all_blocks=1 00:25:23.787 --rc geninfo_unexecuted_blocks=1 00:25:23.787 00:25:23.787 ' 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:23.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.787 --rc genhtml_branch_coverage=1 00:25:23.787 --rc genhtml_function_coverage=1 00:25:23.787 --rc genhtml_legend=1 00:25:23.787 --rc geninfo_all_blocks=1 00:25:23.787 --rc geninfo_unexecuted_blocks=1 00:25:23.787 00:25:23.787 ' 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:23.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.787 --rc genhtml_branch_coverage=1 00:25:23.787 --rc genhtml_function_coverage=1 00:25:23.787 --rc genhtml_legend=1 00:25:23.787 --rc geninfo_all_blocks=1 00:25:23.787 --rc geninfo_unexecuted_blocks=1 00:25:23.787 00:25:23.787 ' 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
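The scripts/common.sh expansion above (cmp_versions 1.15 '<' 2) is a field-by-field dotted-version comparison used to decide which lcov options apply. A simplified sketch of the same logic, reconstructed from the trace rather than quoted from the script (the real helper also validates each field with decimal()):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
            local -a ver1 ver2
            local op=$2 v n
            IFS='.-:' read -ra ver1 <<< "$1"
            IFS='.-:' read -ra ver2 <<< "$3"
            n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
            for (( v = 0; v < n; v++ )); do
                    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
                    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
            done
            [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }

    # as in this trace: lcov 1.15 is older than 2, so the legacy --rc lcov_*_coverage options are kept
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
            lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi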
00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:23.787 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.788 00:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:32.018 00:33:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:32.018 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
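The PCI scan traced above classifies NICs purely by vendor:device ID read from a pre-built pci_bus_cache array: Intel E810 is 0x8086:0x1592/0x159b, X722 is 0x8086:0x37d2, and the Mellanox ConnectX IDs (0x15b3:0x1013 through 0xa2dc) fill the mlx list, before the run settles on the two e810 ports bound to ice. A rough stand-alone equivalent of that discovery step, outside the harness (a sketch assuming lspci is available; not what common.sh itself does):

  # List NICs of the models the harness looks for, by numeric vendor:device ID.
  for id in 8086:1592 8086:159b 8086:37d2 15b3:1017; do
      lspci -D -nn -d "$id"
  done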
00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:32.018 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:32.018 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:25:32.018 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.018 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.019 00:33:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:32.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:25:32.019 00:25:32.019 --- 10.0.0.2 ping statistics --- 00:25:32.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.019 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:32.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:25:32.019 00:25:32.019 --- 10.0.0.1 ping statistics --- 00:25:32.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.019 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=3376562 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 3376562 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3376562 ']' 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:32.019 00:33:01 
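Summarising the topology nvmf_tcp_init builds in the entries above: the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, the first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, an iptables rule opens 4420 on the initiator interface, and a ping in each direction confirms the wiring before nvme-tcp is loaded. The same setup reduced to bare commands (interface names and addresses exactly as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator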
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:32.019 00:33:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:32.019 [2024-10-09 00:33:01.992229] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:25:32.019 [2024-10-09 00:33:01.992291] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.019 [2024-10-09 00:33:02.079685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:32.019 [2024-10-09 00:33:02.174783] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.019 [2024-10-09 00:33:02.174842] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.019 [2024-10-09 00:33:02.174850] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.019 [2024-10-09 00:33:02.174858] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.019 [2024-10-09 00:33:02.174864] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.019 [2024-10-09 00:33:02.176181] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.019 [2024-10-09 00:33:02.176182] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.279 00:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:32.279 00:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:32.279 00:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:32.279 00:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:32.279 00:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:32.279 00:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.279 00:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3376562 00:25:32.279 00:33:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:32.540 [2024-10-09 00:33:03.019631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.540 00:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:32.801 Malloc0 00:25:32.801 00:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:33.063 00:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:33.063 00:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.324 [2024-10-09 00:33:03.843142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.324 00:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:33.594 [2024-10-09 00:33:04.039665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3376934 00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3376934 /var/tmp/bdevperf.sock 00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3376934 ']' 00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:33.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
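The target-side bring-up traced in the last few entries condenses to a short rpc.py sequence: start nvmf_tgt inside the namespace, create the TCP transport, back subsystem cnode1 with a 64 MB malloc bdev, and expose it on two listeners (4420 and 4421) so the host has two paths to the same namespace; bdevperf is then started on its own RPC socket, waiting for commands. Condensed, with the same arguments as this run (rpc.py path shortened to $rpc for readability):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target app, run inside the namespace created earlier
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # host side: bdevperf in wait-for-RPC mode on its own socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &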
00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:33.594 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:34.546 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:34.546 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:34.546 00:33:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:34.546 00:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:35.117 Nvme0n1 00:25:35.117 00:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:35.388 Nvme0n1 00:25:35.388 00:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:35.388 00:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:37.929 00:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:37.929 00:33:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:37.929 00:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:37.929 00:33:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:38.869 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:38.869 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:38.869 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.869 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.129 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.129 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:39.129 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:39.129 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.129 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.129 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:39.129 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.129 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.408 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.408 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.408 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.408 00:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.675 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.675 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:39.675 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.675 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.675 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.675 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:39.675 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.675 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:39.935 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.935 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:39.935 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
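On the host side the multipath bdev is built by attaching the same subsystem twice, once per listener port, with -x multipath, so both attach calls resolve to a single Nvme0n1; a 90-second verify workload is then kicked off through bdevperf.py while the ANA states are flipped underneath it. The traced calls, condensed with the flags used in this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10   # both report Nvme0n1
  # run I/O in the background while the ANA sweep below changes path states
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 120 -s /var/tmp/bdevperf.sock perform_tests &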
00:25:40.195 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:40.195 00:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:41.578 00:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:41.578 00:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:41.578 00:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.578 00:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:41.578 00:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.578 00:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:41.578 00:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.578 00:33:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:41.578 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.578 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:41.578 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.578 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:41.838 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.838 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:41.838 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.838 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:41.838 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.838 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:41.838 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
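Each port_status check in the trace is the same two-step query: dump the host's view of its I/O paths from the bdevperf RPC socket, then pick one path by listener port and read a single boolean out of it. Stand-alone, using the same jq filter the test runs:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Is the path through listener 4421 the currently used one? Prints "true" or "false".
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
  # The same filter with .connected or .accessible reads the other two flags checked above.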
00:25:41.839 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.099 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.099 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:42.099 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.099 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:42.359 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.359 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:42.359 00:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:42.620 00:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:42.620 00:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.014 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:44.274 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.274 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:44.274 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.274 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:44.533 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.533 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:44.533 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.533 00:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:44.533 00:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.533 00:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:44.794 00:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.794 00:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.794 00:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.794 00:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:44.794 00:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:45.055 00:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:45.316 00:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:46.257 00:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:46.257 00:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:46.257 00:33:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.257 00:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:46.517 00:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.517 00:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:46.517 00:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.517 00:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:46.517 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.517 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:46.517 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.517 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:46.778 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.778 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:46.778 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.778 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.039 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.039 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.039 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.039 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.039 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.039 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:47.039 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.039 00:33:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:47.300 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.300 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:47.300 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:47.561 00:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:47.561 00:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.943 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:49.203 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.203 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:49.203 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.203 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:49.465 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.465 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:49.465 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.465 00:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:49.465 00:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.465 00:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:49.465 00:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.465 00:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:49.738 00:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.739 00:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:49.739 00:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:50.001 00:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:50.001 00:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:51.387 00:33:21 
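The sweep being walked through here is always the same pattern: set an ANA state per listener on the target, wait a second for the host to pick it up, then re-run the six port_status checks. One leg of that sweep as plain commands (ANA values as in this part of the run; the target-side rpc.py uses the default socket, so no -s):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Make the 4420 listener inaccessible and leave 4421 optimized ...
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 1
  # ... after which the host reports the 4421 path as current and the 4420 path as not accessible,
  # though both paths stay connected.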
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.387 00:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:51.648 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.648 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:51.648 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.648 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:51.909 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.909 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:51.909 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.909 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:51.909 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:51.909 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:51.909 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.909 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:52.169 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.169 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:52.437 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:52.438 00:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:52.438 00:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:52.703 00:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:53.663 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:53.663 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:53.663 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.663 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.930 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.930 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:53.930 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.930 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.190 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.190 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.190 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.190 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.190 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.190 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:54.190 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.190 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.450 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.450 00:33:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.450 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.450 00:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.710 00:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.710 00:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:54.710 00:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.710 00:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.710 00:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.711 00:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:54.711 00:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:54.971 00:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.232 00:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:56.172 00:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:56.172 00:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:56.172 00:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.172 00:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.432 00:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.432 00:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:56.432 00:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.432 00:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.693 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.693 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.693 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.693 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:56.693 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.693 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:56.693 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.693 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:56.954 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.954 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:56.954 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.954 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.215 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.215 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.215 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.215 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:57.215 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.215 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:57.215 00:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:57.475 00:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:57.736 00:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
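(For readers following the trace: the port_status, check_status and set_ANA_state helpers exercised above live in test/nvmf/host/multipath_status.sh. The following is a minimal sketch of what they appear to do, reconstructed only from the commands visible in this log; the exact function bodies, the rpc_py variable name, and argument order are assumptions, not the authoritative script.)

    # Sketch (assumption) of the helpers traced in this log.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # port_status <trsvcid> <field> <expected>: query io_paths over the
    # bdevperf RPC socket, select the path for the given listener port,
    # and compare one field (current/connected/accessible) to the expectation.
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

    # set_ANA_state <state4420> <state4421>: set ANA state on the target's
    # two TCP listeners (ports 4420 and 4421) for nqn.2016-06.io.spdk:cnode1.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

(Read against the trace: after set_ANA_state non_optimized optimized the test expects check_status false true true true true true, i.e. port 4420 is no longer the current path but stays connected and accessible; after set_ANA_state non_optimized inaccessible it expects check_status true false true true true false, i.e. 4421 drops out of the accessible/current set while both paths remain connected.)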
00:25:58.676 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:58.676 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:58.676 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.676 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.937 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.937 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:58.937 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.937 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.937 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.937 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.937 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.937 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:59.198 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.198 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.198 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.198 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.458 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.458 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:59.458 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.458 00:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:59.458 00:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.458 00:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:59.458 00:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.458 00:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.718 00:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.718 00:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:59.718 00:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:59.978 00:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:00.239 00:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:01.181 00:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:01.181 00:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:01.181 00:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.181 00:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:01.440 00:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.440 00:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:01.440 00:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.440 00:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:01.440 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.440 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:01.440 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.440 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.702 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:01.702 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.702 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.702 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.962 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.962 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.962 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.963 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.963 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.963 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:01.963 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.963 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3376934 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3376934 ']' 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3376934 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3376934 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3376934' 00:26:02.224 killing process with pid 3376934 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3376934 00:26:02.224 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3376934 00:26:02.224 { 00:26:02.224 "results": [ 00:26:02.224 { 00:26:02.224 "job": "Nvme0n1", 
00:26:02.224 "core_mask": "0x4", 00:26:02.224 "workload": "verify", 00:26:02.224 "status": "terminated", 00:26:02.224 "verify_range": { 00:26:02.224 "start": 0, 00:26:02.224 "length": 16384 00:26:02.224 }, 00:26:02.224 "queue_depth": 128, 00:26:02.224 "io_size": 4096, 00:26:02.224 "runtime": 26.69765, 00:26:02.224 "iops": 11900.597992707224, 00:26:02.224 "mibps": 46.486710909012594, 00:26:02.224 "io_failed": 0, 00:26:02.224 "io_timeout": 0, 00:26:02.224 "avg_latency_us": 10736.575717586034, 00:26:02.224 "min_latency_us": 484.6933333333333, 00:26:02.224 "max_latency_us": 3019898.88 00:26:02.224 } 00:26:02.224 ], 00:26:02.224 "core_count": 1 00:26:02.224 } 00:26:02.502 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3376934 00:26:02.502 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:02.502 [2024-10-09 00:33:04.119967] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:26:02.502 [2024-10-09 00:33:04.120047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376934 ] 00:26:02.502 [2024-10-09 00:33:04.204295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.502 [2024-10-09 00:33:04.295860] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.502 Running I/O for 90 seconds... 00:26:02.502 10321.00 IOPS, 40.32 MiB/s [2024-10-08T22:33:33.137Z] 10771.50 IOPS, 42.08 MiB/s [2024-10-08T22:33:33.137Z] 10877.67 IOPS, 42.49 MiB/s [2024-10-08T22:33:33.137Z] 11153.00 IOPS, 43.57 MiB/s [2024-10-08T22:33:33.137Z] 11510.20 IOPS, 44.96 MiB/s [2024-10-08T22:33:33.137Z] 11742.67 IOPS, 45.87 MiB/s [2024-10-08T22:33:33.137Z] 11920.14 IOPS, 46.56 MiB/s [2024-10-08T22:33:33.137Z] 12056.88 IOPS, 47.10 MiB/s [2024-10-08T22:33:33.137Z] 12198.33 IOPS, 47.65 MiB/s [2024-10-08T22:33:33.137Z] 12310.50 IOPS, 48.09 MiB/s [2024-10-08T22:33:33.137Z] 12373.00 IOPS, 48.33 MiB/s [2024-10-08T22:33:33.137Z] [2024-10-09 00:33:17.936809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.502 [2024-10-09 00:33:17.936844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:02.502 [2024-10-09 00:33:17.936880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.502 [2024-10-09 00:33:17.936887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:02.502 [2024-10-09 00:33:17.936898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.502 [2024-10-09 00:33:17.936904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:02.502 [2024-10-09 00:33:17.936914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.502 [2024-10-09 00:33:17.936920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:02.502 [2024-10-09 00:33:17.936930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.502 [2024-10-09 00:33:17.936935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:02.502 [2024-10-09 00:33:17.936945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.502 [2024-10-09 00:33:17.936951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:02.502 [2024-10-09 00:33:17.936961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.502 [2024-10-09 00:33:17.936966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:02.502 [2024-10-09 00:33:17.936977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.502 [2024-10-09 00:33:17.936982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.502 [2024-10-09 00:33:17.937271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.502 [2024-10-09 00:33:17.937280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:02.502 [2024-10-09 00:33:17.937292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.503 [2024-10-09 00:33:17.937303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.937314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.503 [2024-10-09 00:33:17.937320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.937331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.503 [2024-10-09 00:33:17.937337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.937348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.503 [2024-10-09 00:33:17.937353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.937364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:02.503 [2024-10-09 00:33:17.937369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.937380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.503 [2024-10-09 00:33:17.937385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.937396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.503 [2024-10-09 00:33:17.937401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.937968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.503 [2024-10-09 00:33:17.937975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.937987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.937993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:119 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 
00:33:17.938291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.503 [2024-10-09 00:33:17.938583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.503 [2024-10-09 00:33:17.938595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938636] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.938984] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.938989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 
00:33:17.939157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.504 [2024-10-09 00:33:17.939377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.939398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.939418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:02.504 [2024-10-09 00:33:17.939433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.504 [2024-10-09 00:33:17.939438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:02.505 [2024-10-09 00:33:17.939879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.939982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.939996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.940001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.940016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.940022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:17.940037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:17.940043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:02.505 12251.42 IOPS, 47.86 MiB/s [2024-10-08T22:33:33.140Z] 11309.00 IOPS, 44.18 MiB/s [2024-10-08T22:33:33.140Z] 10501.21 IOPS, 41.02 MiB/s [2024-10-08T22:33:33.140Z] 9946.33 IOPS, 38.85 MiB/s [2024-10-08T22:33:33.140Z] 10129.25 IOPS, 39.57 MiB/s [2024-10-08T22:33:33.140Z] 10292.41 IOPS, 40.20 MiB/s [2024-10-08T22:33:33.140Z] 10648.89 IOPS, 41.60 MiB/s [2024-10-08T22:33:33.140Z] 10975.21 IOPS, 42.87 MiB/s [2024-10-08T22:33:33.140Z] 11154.90 IOPS, 43.57 
MiB/s [2024-10-08T22:33:33.140Z] 11228.57 IOPS, 43.86 MiB/s [2024-10-08T22:33:33.140Z] 11297.64 IOPS, 44.13 MiB/s [2024-10-08T22:33:33.140Z] 11526.35 IOPS, 45.02 MiB/s [2024-10-08T22:33:33.140Z] 11737.54 IOPS, 45.85 MiB/s [2024-10-08T22:33:33.140Z] [2024-10-09 00:33:30.615866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.615902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:30.615920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.615926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:30.615937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.615942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:30.615953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.615958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:30.615968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.615973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:30.615984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.615989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:30.615999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.616004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:30.616014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.616019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:30.616029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.616035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.505 [2024-10-09 00:33:30.616045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.505 [2024-10-09 00:33:30.616050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.506 [2024-10-09 00:33:30.618262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:26:02.506 [2024-10-09 00:33:30.618272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.506 [2024-10-09 00:33:30.618650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:02.506 [2024-10-09 00:33:30.618661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.618666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.618676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.618681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.618692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.618697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.507 [2024-10-09 00:33:30.619387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.619496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 
nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.507 [2024-10-09 00:33:30.619697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.619712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.619734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.619751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.619767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.619777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.619783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:26:02.507 [2024-10-09 00:33:30.620268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:02.507 [2024-10-09 00:33:30.620361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.507 [2024-10-09 00:33:30.620366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.620382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.620397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.620413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.620428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.620444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.620460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.620475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.620490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.620506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.620521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.620536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.620553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.620569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.620989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.508 [2024-10-09 00:33:30.621141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.508 [2024-10-09 00:33:30.621348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.508 [2024-10-09 00:33:30.621621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:02.508 [2024-10-09 00:33:30.621632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:26:02.509 [2024-10-09 00:33:30.621808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.621938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.621979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.621984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.623167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.623185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.623202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.623218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.623233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.623249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.623264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.623280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.623301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.623316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.623331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.623347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.623362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.623377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.623392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.509 [2024-10-09 00:33:30.623407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.509 [2024-10-09 00:33:30.623423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.509 [2024-10-09 00:33:30.623438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:02.509 [2024-10-09 00:33:30.623448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.623453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.623469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.623485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.623501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.623516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.623532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.623547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.623563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.623577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.623593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.623608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.623623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.623639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.623654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.623665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.623671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:26:02.510 [2024-10-09 00:33:30.625240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.510 [2024-10-09 00:33:30.625401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:02.510 [2024-10-09 00:33:30.625427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.510 [2024-10-09 00:33:30.625432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.625442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.625447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.626481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.626497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.626512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.626527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.626638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.626653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.626668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.626683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.511 [2024-10-09 00:33:30.626699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.626781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.626787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.627137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.627148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.627160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.627165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.627175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.627180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.627191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.627196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.627207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.627212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.627222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.627227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.627237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.627242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.636592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.636615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.636627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.636633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.636644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.636650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.636662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.636668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.636679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.636685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.636700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.511 [2024-10-09 00:33:30.636706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.636717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.511 [2024-10-09 00:33:30.636730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:02.511 [2024-10-09 00:33:30.636746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.636752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.636764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.636769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.636781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.636787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.637661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.637680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:26:02.512 [2024-10-09 00:33:30.637785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.637876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.637893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.637910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.637927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.637943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.637960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.637977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.637988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.637995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.638011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.638028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.638044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.638061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.638078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.638094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.638111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.638128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.638145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.638161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.638178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.638196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.512 [2024-10-09 00:33:30.638213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.638230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.638247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.638264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.638275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.512 [2024-10-09 00:33:30.638281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.640079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.512 [2024-10-09 00:33:30.640094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:02.512 [2024-10-09 00:33:30.640107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.513 [2024-10-09 00:33:30.640130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.513 [2024-10-09 00:33:30.640147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.513 [2024-10-09 00:33:30.640164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.513 [2024-10-09 00:33:30.640181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 
nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.513 [2024-10-09 00:33:30.640284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.513 [2024-10-09 00:33:30.640301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.513 [2024-10-09 00:33:30.640318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.513 [2024-10-09 00:33:30.640334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.513 [2024-10-09 00:33:30.640401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:02.513 [2024-10-09 00:33:30.640414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.513 [2024-10-09 00:33:30.640419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:02.513 [2024-10-09 00:33:30.640430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.513 [2024-10-09 00:33:30.640436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* entries of the same form (READ/WRITE, sqid:1, nsid:1, lba ~88744-91136, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged between 2024-10-09 00:33:30.640 and 00:33:30.651 ...]
00:26:02.518 [2024-10-09 00:33:30.651030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.518 [2024-10-09 00:33:30.651035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.651046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.518 [2024-10-09 00:33:30.651051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.651063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.518 [2024-10-09 00:33:30.651068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.651078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.518 [2024-10-09 00:33:30.651083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.651094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.518 [2024-10-09 00:33:30.651099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.651109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.518 [2024-10-09 00:33:30.651114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.651125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.518 [2024-10-09 00:33:30.651130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.518 [2024-10-09 00:33:30.652528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.518 [2024-10-09 00:33:30.652546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.518 [2024-10-09 00:33:30.652561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.518 [2024-10-09 00:33:30.652577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.518 [2024-10-09 00:33:30.652592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.518 [2024-10-09 00:33:30.652607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.518 [2024-10-09 00:33:30.652623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.518 [2024-10-09 00:33:30.652643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.518 [2024-10-09 00:33:30.652658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.518 [2024-10-09 00:33:30.652673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.518 [2024-10-09 00:33:30.652689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.518 [2024-10-09 00:33:30.652704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.518 [2024-10-09 00:33:30.652714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.518 [2024-10-09 00:33:30.652724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.652842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.652903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.652965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.652990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.652996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:26:02.519 [2024-10-09 00:33:30.653054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.653059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.653089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.653104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.653150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.653165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.653181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.653666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.519 [2024-10-09 00:33:30.653735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.653750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:02.519 [2024-10-09 00:33:30.653760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.519 [2024-10-09 00:33:30.653766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.653781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.653796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.653811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.653827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.653842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.653857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.653872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.653889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.653904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.653915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.653920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.520 [2024-10-09 00:33:30.655264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 
nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.520 [2024-10-09 00:33:30.655620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.520 [2024-10-09 00:33:30.655681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:02.520 [2024-10-09 00:33:30.655691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.521 [2024-10-09 00:33:30.655696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.655707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.521 [2024-10-09 00:33:30.655712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:26:02.521 [2024-10-09 00:33:30.655725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.655731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.655742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.655747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.655757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.521 [2024-10-09 00:33:30.655762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.655773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.521 [2024-10-09 00:33:30.655778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.655788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.655793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.655803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.655808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.655818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.655823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.655834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.521 [2024-10-09 00:33:30.655839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.656324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.656334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.656345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.656351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.656361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.656366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.656376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.656381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.656392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.521 [2024-10-09 00:33:30.656397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.656409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.521 [2024-10-09 00:33:30.656414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.656424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.521 [2024-10-09 00:33:30.656429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:02.521 [2024-10-09 00:33:30.656440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.521 [2024-10-09 00:33:30.656445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.656455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.522 [2024-10-09 00:33:30.656460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.656470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.522 [2024-10-09 00:33:30.656475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.656485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.522 [2024-10-09 00:33:30.656491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.656501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.522 [2024-10-09 00:33:30.656506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.656517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.522 [2024-10-09 00:33:30.656522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.657962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.522 [2024-10-09 00:33:30.657974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.657986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.522 [2024-10-09 00:33:30.657992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.658002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.522 [2024-10-09 00:33:30.658007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.658017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.522 [2024-10-09 00:33:30.658022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.658033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.522 [2024-10-09 00:33:30.658041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.658051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.522 [2024-10-09 00:33:30.658056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:02.522 [2024-10-09 00:33:30.658067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.523 [2024-10-09 00:33:30.658071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:02.523 [2024-10-09 00:33:30.658082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.523 [2024-10-09 00:33:30.658087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:02.523 [2024-10-09 00:33:30.658097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.523 [2024-10-09 00:33:30.658102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:02.523 [2024-10-09 00:33:30.658112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.523 [2024-10-09 00:33:30.658117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:02.523 [2024-10-09 00:33:30.658127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.523 [2024-10-09 00:33:30.658133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.523 [2024-10-09 00:33:30.658143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.523 [2024-10-09 00:33:30.658148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.523 [2024-10-09 00:33:30.658158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.523 [2024-10-09 00:33:30.658163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.523 [2024-10-09 00:33:30.658173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.523 [2024-10-09 00:33:30.658179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.523 [2024-10-09 00:33:30.658189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.523 [2024-10-09 00:33:30.658194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.523 [2024-10-09 00:33:30.658204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.524 [2024-10-09 00:33:30.658209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.524 [2024-10-09 00:33:30.658226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.524 [2024-10-09 00:33:30.658241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.524 [2024-10-09 00:33:30.658256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.524 [2024-10-09 00:33:30.658272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.524 [2024-10-09 00:33:30.658287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.524 [2024-10-09 00:33:30.658302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.524 [2024-10-09 00:33:30.658317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.524 [2024-10-09 00:33:30.658332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.524 [2024-10-09 00:33:30.658348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:02.524 [2024-10-09 00:33:30.658358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.525 [2024-10-09 00:33:30.658363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.525 [2024-10-09 00:33:30.658378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.525 [2024-10-09 00:33:30.658394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.525 [2024-10-09 00:33:30.658409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.525 [2024-10-09 00:33:30.658427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.525 [2024-10-09 00:33:30.658442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.525 [2024-10-09 00:33:30.658458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.525 [2024-10-09 00:33:30.658473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.525 [2024-10-09 00:33:30.658489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.525 [2024-10-09 00:33:30.658504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:02.525 [2024-10-09 00:33:30.658514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.526 [2024-10-09 00:33:30.658519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.526 [2024-10-09 00:33:30.658535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.526 [2024-10-09 00:33:30.658550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:26:02.526 [2024-10-09 00:33:30.658560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.526 [2024-10-09 00:33:30.658565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.526 [2024-10-09 00:33:30.658581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.526 [2024-10-09 00:33:30.658596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.526 [2024-10-09 00:33:30.658613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.526 [2024-10-09 00:33:30.658628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.526 [2024-10-09 00:33:30.658643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.526 [2024-10-09 00:33:30.658659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.526 [2024-10-09 00:33:30.658674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:02.526 [2024-10-09 00:33:30.658684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.527 [2024-10-09 00:33:30.658689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.658699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.527 [2024-10-09 00:33:30.658705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.658715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.527 [2024-10-09 00:33:30.658723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.658734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.527 [2024-10-09 00:33:30.658739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.660390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.527 [2024-10-09 00:33:30.660403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.660415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.527 [2024-10-09 00:33:30.660421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.660431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.527 [2024-10-09 00:33:30.660436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.660447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.527 [2024-10-09 00:33:30.660454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.660465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.527 [2024-10-09 00:33:30.660470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.660480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.527 [2024-10-09 00:33:30.660486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.660496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.527 [2024-10-09 00:33:30.660501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.660511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.527 [2024-10-09 00:33:30.660516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:02.527 [2024-10-09 00:33:30.660527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.528 [2024-10-09 00:33:30.660532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.528 [2024-10-09 00:33:30.660548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.528 [2024-10-09 00:33:30.660563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.528 [2024-10-09 00:33:30.660578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.528 [2024-10-09 00:33:30.660594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.528 [2024-10-09 00:33:30.660609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.528 [2024-10-09 00:33:30.660624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.528 [2024-10-09 00:33:30.660641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.528 [2024-10-09 00:33:30.660657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:02.528 [2024-10-09 00:33:30.660672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:02.528 [2024-10-09 00:33:30.660682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.528 [2024-10-09 00:33:30.660687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.660698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.529 [2024-10-09 00:33:30.660703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.660713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.529 [2024-10-09 00:33:30.660718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.660739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.529 [2024-10-09 00:33:30.660746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.660756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.529 [2024-10-09 00:33:30.660761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.660772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.529 [2024-10-09 00:33:30.660777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.529 [2024-10-09 00:33:30.661268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.529 [2024-10-09 00:33:30.661285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.529 [2024-10-09 00:33:30.661300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 
nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.529 [2024-10-09 00:33:30.661315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.529 [2024-10-09 00:33:30.661336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.529 [2024-10-09 00:33:30.661352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.529 [2024-10-09 00:33:30.661367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.529 [2024-10-09 00:33:30.661382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.529 [2024-10-09 00:33:30.661398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.529 [2024-10-09 00:33:30.661413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.529 [2024-10-09 00:33:30.661429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:02.529 [2024-10-09 00:33:30.661439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.529 [2024-10-09 00:33:30.661444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.661506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.661615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:26:02.530 [2024-10-09 00:33:30.661625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.661630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.661672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.661677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.662855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.662867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.662880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.662887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.662897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.662902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.662913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.662918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.662928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.662933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.662943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.662949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.662959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.662964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.662974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.662979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.662990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.662995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663103] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.530 [2024-10-09 00:33:30.663164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.663179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.663194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.663210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.663225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.663240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 
00:33:30.663256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:02.530 [2024-10-09 00:33:30.663267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.530 [2024-10-09 00:33:30.663272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.663379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91352 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.663410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.663425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.663472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.663514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.663519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:26:02.531 [2024-10-09 00:33:30.664767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.531 [2024-10-09 00:33:30.664949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:02.531 [2024-10-09 00:33:30.664991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.531 [2024-10-09 00:33:30.664996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.665967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.665977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.665989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.665994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 
nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.532 [2024-10-09 00:33:30.666431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:02.532 [2024-10-09 00:33:30.666456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.532 [2024-10-09 00:33:30.666461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.666471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.666477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.666487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.666492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.666502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.666507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.666517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.666522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.666533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.666538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.666548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.666553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.666563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.666568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.666578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.666584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.666594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.666599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.668564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.668581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:26:02.533 [2024-10-09 00:33:30.668606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.668808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.533 [2024-10-09 00:33:30.668823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:02.533 [2024-10-09 00:33:30.668864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.533 [2024-10-09 00:33:30.668869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.668879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.668884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.668894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.668900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.668910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.668915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.668925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.534 [2024-10-09 00:33:30.668930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.668940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.668945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.668957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.668962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.668972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.534 [2024-10-09 00:33:30.668977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.668988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.534 [2024-10-09 00:33:30.668992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.534 [2024-10-09 00:33:30.669054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:02.534 [2024-10-09 00:33:30.669069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.534 [2024-10-09 00:33:30.669100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.534 [2024-10-09 00:33:30.669209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:02.534 [2024-10-09 00:33:30.669220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 
nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.669225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.669240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.669821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.669849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.535 [2024-10-09 00:33:30.669865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.535 [2024-10-09 00:33:30.669881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.535 [2024-10-09 00:33:30.669897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.535 [2024-10-09 00:33:30.669916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.535 [2024-10-09 00:33:30.669932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.535 [2024-10-09 00:33:30.669947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.669963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.669978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.669988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.669993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.670003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.670008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.670018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.670024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.670034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.670039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.670049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.670054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.535 [2024-10-09 00:33:30.670064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.535 [2024-10-09 00:33:30.670069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.536 [2024-10-09 00:33:30.670085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.536 [2024-10-09 00:33:30.670101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:26:02.536 [2024-10-09 00:33:30.670112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.536 [2024-10-09 00:33:30.670117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.536 [2024-10-09 00:33:30.670132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.536 [2024-10-09 00:33:30.670514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.536 [2024-10-09 00:33:30.670531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.536 [2024-10-09 00:33:30.670546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.536 [2024-10-09 00:33:30.670562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.536 [2024-10-09 00:33:30.670577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.536 [2024-10-09 00:33:30.670593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.536 [2024-10-09 00:33:30.670603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.536 [2024-10-09 00:33:30.670608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.536 11847.92 IOPS, 46.28 MiB/s [2024-10-08T22:33:33.171Z] 11880.35 IOPS, 46.41 MiB/s [2024-10-08T22:33:33.171Z] Received shutdown signal, test time was about 26.698270 seconds 00:26:02.536 00:26:02.536 Latency(us) 00:26:02.536 [2024-10-08T22:33:33.171Z] Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:26:02.536 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:02.536 Verification LBA range: start 0x0 length 0x4000 00:26:02.537 Nvme0n1 : 26.70 11900.60 46.49 0.00 0.00 10736.58 484.69 3019898.88 00:26:02.537 [2024-10-08T22:33:33.172Z] =================================================================================================================== 00:26:02.537 [2024-10-08T22:33:33.172Z] Total : 11900.60 46.49 0.00 0.00 10736.58 484.69 3019898.88 00:26:02.537 00:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.807 rmmod nvme_tcp 00:26:02.807 rmmod nvme_fabrics 00:26:02.807 rmmod nvme_keyring 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 3376562 ']' 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 3376562 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3376562 ']' 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3376562 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3376562 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3376562' 00:26:02.807 killing process with pid 3376562 
00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3376562 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3376562 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:02.807 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.808 00:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.353 00:26:05.353 real 0m41.361s 00:26:05.353 user 1m46.565s 00:26:05.353 sys 0m11.716s 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:05.353 ************************************ 00:26:05.353 END TEST nvmf_host_multipath_status 00:26:05.353 ************************************ 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.353 ************************************ 00:26:05.353 START TEST nvmf_discovery_remove_ifc 00:26:05.353 ************************************ 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:05.353 * Looking for test storage... 
00:26:05.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:05.353 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:05.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.354 --rc genhtml_branch_coverage=1 00:26:05.354 --rc genhtml_function_coverage=1 00:26:05.354 --rc genhtml_legend=1 00:26:05.354 --rc geninfo_all_blocks=1 00:26:05.354 --rc geninfo_unexecuted_blocks=1 00:26:05.354 00:26:05.354 ' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:05.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.354 --rc genhtml_branch_coverage=1 00:26:05.354 --rc genhtml_function_coverage=1 00:26:05.354 --rc genhtml_legend=1 00:26:05.354 --rc geninfo_all_blocks=1 00:26:05.354 --rc geninfo_unexecuted_blocks=1 00:26:05.354 00:26:05.354 ' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:05.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.354 --rc genhtml_branch_coverage=1 00:26:05.354 --rc genhtml_function_coverage=1 00:26:05.354 --rc genhtml_legend=1 00:26:05.354 --rc geninfo_all_blocks=1 00:26:05.354 --rc geninfo_unexecuted_blocks=1 00:26:05.354 00:26:05.354 ' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:05.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.354 --rc genhtml_branch_coverage=1 00:26:05.354 --rc genhtml_function_coverage=1 00:26:05.354 --rc genhtml_legend=1 00:26:05.354 --rc geninfo_all_blocks=1 00:26:05.354 --rc geninfo_unexecuted_blocks=1 00:26:05.354 00:26:05.354 ' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.354 
00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:05.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:05.354 00:33:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:13.494 00:33:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:13.494 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.494 00:33:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:13.494 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:13.494 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:13.494 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:26:13.494 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:13.495 00:33:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:13.495 
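[editor's note] The nvmf_tcp_init sequence traced above moves one E810 port into a private network namespace so the same machine can play both target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, default namespace). As standalone commands, taken from the trace (root required):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator port, tagged so teardown can find the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

Keeping the two physical ports in separate namespaces stops the kernel from short-circuiting the traffic through the local stack, so the NVMe/TCP connections really cross the link between them.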
00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:13.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:26:13.495 00:26:13.495 --- 10.0.0.2 ping statistics --- 00:26:13.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.495 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:26:13.495 00:26:13.495 --- 10.0.0.1 ping statistics --- 00:26:13.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.495 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=3387275 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 3387275 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3387275 ']' 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
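[editor's note] After the connectivity check, the target application is started inside the namespace. A condensed replay of what the trace shows (paths shortened to the SPDK source root; the harness then polls the default RPC socket /var/tmp/spdk.sock until the app answers):

    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp                                    # kernel NVMe/TCP support for later tests
    # SPDK NVMe-oF target: instance 0, all trace groups enabled, core mask 0x2
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!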
00:26:13.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.495 00:33:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.495 [2024-10-09 00:33:43.388439] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:26:13.495 [2024-10-09 00:33:43.388501] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.495 [2024-10-09 00:33:43.476088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.495 [2024-10-09 00:33:43.569090] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.495 [2024-10-09 00:33:43.569149] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.495 [2024-10-09 00:33:43.569158] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.495 [2024-10-09 00:33:43.569165] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.495 [2024-10-09 00:33:43.569171] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.495 [2024-10-09 00:33:43.569961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.756 [2024-10-09 00:33:44.264815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.756 [2024-10-09 00:33:44.273086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:13.756 null0 00:26:13.756 [2024-10-09 00:33:44.305020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3387625 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3387625 /tmp/host.sock 00:26:13.756 00:33:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3387625 ']' 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:13.756 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.756 00:33:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.756 [2024-10-09 00:33:44.389239] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:26:13.756 [2024-10-09 00:33:44.389302] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387625 ] 00:26:14.017 [2024-10-09 00:33:44.470439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.017 [2024-10-09 00:33:44.565761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.588 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.848 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.848 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:14.848 00:33:45 
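[editor's note] The host side of the test is a second SPDK application that consumes the discovery service. Replayed with scripts/rpc.py in place of the rpc_cmd wrapper (an assumption; the wrapper forwards to the same RPC server), the traced sequence is:

    # host app: one core, its own RPC socket, bdev_nvme debug logging, framework held until RPC
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1      # options exactly as traced
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    # connect to the discovery service on the target and auto-attach whatever it advertises
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

The short loss/reconnect/fast-io-fail timeouts are what let the interface-removal step later in the trace resolve within a few seconds rather than the defaults.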
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.848 00:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.923 [2024-10-09 00:33:46.354924] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:15.923 [2024-10-09 00:33:46.354948] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:15.923 [2024-10-09 00:33:46.354962] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:15.923 [2024-10-09 00:33:46.441232] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:15.923 [2024-10-09 00:33:46.504444] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:15.923 [2024-10-09 00:33:46.504493] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:15.923 [2024-10-09 00:33:46.504515] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:15.923 [2024-10-09 00:33:46.504530] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:15.923 [2024-10-09 00:33:46.504550] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.923 [2024-10-09 00:33:46.512850] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x136e450 was disconnected and freed. delete nvme_qpair. 
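[editor's note] Everything that follows is driven by two small helpers; reconstructed from the trace (the real versions in discovery_remove_ifc.sh presumably carry a timeout, omitted here, and rpc_cmd is again assumed to forward to scripts/rpc.py):

    # bdev names on the host app, collapsed to one sorted line ('' when none exist)
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # poll once per second until the bdev list equals the expected string
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

In the trace this is used as wait_for_bdev nvme0n1 here, wait_for_bdev '' after the interface is pulled, and wait_for_bdev nvme1n1 once it comes back.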
00:26:15.923 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.182 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:16.182 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:16.183 00:33:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.123 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.123 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.123 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.123 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.123 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.123 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.123 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.383 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.383 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:17.383 00:33:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.325 00:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.325 00:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.325 00:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.325 00:33:48 
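[editor's note] The actual event under test is the pair of commands at discovery_remove_ifc.sh@75-76 in the line above: the address carrying the discovery and I/O connections is removed and the port taken down, after which the poll iterations below wait for bdev_nvme to give up on nvme0n1:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''     # list must drain to empty once the ctrlr-loss timeout expires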
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.325 00:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.325 00:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.325 00:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.325 00:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.325 00:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:18.325 00:33:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:19.266 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.266 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.266 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.266 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.266 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.266 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.266 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.266 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.525 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:19.525 00:33:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:20.478 00:33:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.419 [2024-10-09 00:33:51.945136] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:21.419 [2024-10-09 00:33:51.945175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.419 [2024-10-09 00:33:51.945185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.419 [2024-10-09 00:33:51.945194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.419 [2024-10-09 00:33:51.945199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.419 [2024-10-09 00:33:51.945205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.419 [2024-10-09 00:33:51.945210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.419 [2024-10-09 00:33:51.945216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.419 [2024-10-09 00:33:51.945221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.419 [2024-10-09 00:33:51.945227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.419 [2024-10-09 00:33:51.945232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.419 [2024-10-09 00:33:51.945242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134acc0 is same with the state(6) to be set 00:26:21.419 [2024-10-09 00:33:51.955157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134acc0 (9): Bad file descriptor 00:26:21.419 [2024-10-09 00:33:51.965192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:21.419 00:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.419 00:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.419 00:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.419 00:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.419 00:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.419 00:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.419 00:33:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.800 [2024-10-09 00:33:52.993801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:22.800 [2024-10-09 00:33:52.993893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134acc0 with addr=10.0.0.2, port=4420 00:26:22.800 [2024-10-09 00:33:52.993925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134acc0 is same with the state(6) to be set 00:26:22.800 [2024-10-09 00:33:52.993980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134acc0 (9): Bad file descriptor 00:26:22.800 
[2024-10-09 00:33:52.995099] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:22.800 [2024-10-09 00:33:52.995168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:22.800 [2024-10-09 00:33:52.995190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:22.800 [2024-10-09 00:33:52.995213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:22.800 [2024-10-09 00:33:52.995277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.800 [2024-10-09 00:33:52.995304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:22.800 00:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.800 00:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:22.800 00:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.369 [2024-10-09 00:33:53.997703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:23.369 [2024-10-09 00:33:53.997722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:23.369 [2024-10-09 00:33:53.997729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:23.369 [2024-10-09 00:33:53.997735] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:23.369 [2024-10-09 00:33:53.997745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.369 [2024-10-09 00:33:53.997761] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:23.369 [2024-10-09 00:33:53.997777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.369 [2024-10-09 00:33:53.997785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.369 [2024-10-09 00:33:53.997792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.369 [2024-10-09 00:33:53.997802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.369 [2024-10-09 00:33:53.997808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.369 [2024-10-09 00:33:53.997813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.369 [2024-10-09 00:33:53.997819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.369 [2024-10-09 00:33:53.997825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.369 [2024-10-09 00:33:53.997831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.369 [2024-10-09 00:33:53.997836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.369 [2024-10-09 00:33:53.997841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:23.369 [2024-10-09 00:33:53.998282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133a400 (9): Bad file descriptor 00:26:23.369 [2024-10-09 00:33:53.999293] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:23.369 [2024-10-09 00:33:53.999302] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:23.629 00:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.011 00:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.011 00:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.011 00:33:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.011 00:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.011 00:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.011 00:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.011 00:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.011 00:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.011 00:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:25.011 00:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.580 [2024-10-09 00:33:56.018177] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:25.581 [2024-10-09 00:33:56.018190] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:25.581 [2024-10-09 00:33:56.018199] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:25.581 [2024-10-09 00:33:56.148574] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:25.841 00:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.841 [2024-10-09 00:33:56.372405] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:25.841 [2024-10-09 00:33:56.372438] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:25.841 [2024-10-09 00:33:56.372453] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:25.841 [2024-10-09 00:33:56.372464] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:25.841 [2024-10-09 00:33:56.372471] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:25.841 [2024-10-09 00:33:56.376644] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1346490 was disconnected and freed. 
delete nvme_qpair. 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3387625 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3387625 ']' 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3387625 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.782 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3387625 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3387625' 00:26:27.044 killing process with pid 3387625 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3387625 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3387625 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:27.044 rmmod nvme_tcp 00:26:27.044 rmmod nvme_fabrics 00:26:27.044 rmmod nvme_keyring 
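[editor's note] The killprocess helper traced here follows a simple guard-then-kill pattern; a sketch of the same logic (the real helper in autotest_common.sh may carry extra checks):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing left to do
        if [[ $(uname) == Linux ]]; then
            # never signal the wrapping sudo by mistake
            [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap so the exit/trap path stays clean
    }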
00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 3387275 ']' 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 3387275 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3387275 ']' 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3387275 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:27.044 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3387275 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3387275' 00:26:27.304 killing process with pid 3387275 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3387275 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3387275 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.304 00:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.855 00:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.855 00:26:29.855 real 0m24.333s 00:26:29.855 user 0m29.174s 00:26:29.855 sys 0m7.255s 00:26:29.855 00:33:59 
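[editor's note] Teardown, as traced above, mirrors the setup. The only non-obvious step is the iptables one: rather than remembering which rules were added, everything tagged with the SPDK_NVMF comment is filtered out of a full save/restore cycle. A condensed version (the namespace removal inside _remove_spdk_ns is an assumption, since its output is redirected away in the trace):

    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # drop only the rules this test added, leave the rest of the firewall untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk        # assumed content of _remove_spdk_ns
    ip -4 addr flush cvl_0_1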
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:29.855 00:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 ************************************ 00:26:29.855 END TEST nvmf_discovery_remove_ifc 00:26:29.855 ************************************ 00:26:29.855 00:33:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:29.855 00:33:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:29.855 00:33:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:29.855 00:33:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.855 ************************************ 00:26:29.855 START TEST nvmf_identify_kernel_target 00:26:29.855 ************************************ 00:26:29.855 00:33:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:29.855 * Looking for test storage... 00:26:29.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:29.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.856 --rc genhtml_branch_coverage=1 00:26:29.856 --rc genhtml_function_coverage=1 00:26:29.856 --rc genhtml_legend=1 00:26:29.856 --rc geninfo_all_blocks=1 00:26:29.856 --rc geninfo_unexecuted_blocks=1 00:26:29.856 00:26:29.856 ' 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:29.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.856 --rc genhtml_branch_coverage=1 00:26:29.856 --rc genhtml_function_coverage=1 00:26:29.856 --rc genhtml_legend=1 00:26:29.856 --rc geninfo_all_blocks=1 00:26:29.856 --rc geninfo_unexecuted_blocks=1 00:26:29.856 00:26:29.856 ' 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:29.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.856 --rc genhtml_branch_coverage=1 00:26:29.856 --rc genhtml_function_coverage=1 00:26:29.856 --rc genhtml_legend=1 00:26:29.856 --rc geninfo_all_blocks=1 00:26:29.856 --rc geninfo_unexecuted_blocks=1 00:26:29.856 00:26:29.856 ' 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:29.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.856 --rc genhtml_branch_coverage=1 00:26:29.856 --rc genhtml_function_coverage=1 00:26:29.856 --rc genhtml_legend=1 00:26:29.856 --rc geninfo_all_blocks=1 00:26:29.856 --rc geninfo_unexecuted_blocks=1 00:26:29.856 00:26:29.856 ' 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:29.856 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:29.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.857 00:34:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:38.050 00:34:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:38.050 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:38.050 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:38.050 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:38.050 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.050 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:38.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:26:38.051 00:26:38.051 --- 10.0.0.2 ping statistics --- 00:26:38.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.051 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:26:38.051 00:26:38.051 --- 10.0.0.1 ping statistics --- 00:26:38.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.051 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:38.051 00:34:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:38.051 00:34:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:40.595 Waiting for block devices as requested 00:26:40.595 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:40.856 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:40.856 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:40.856 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:41.117 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:41.117 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:41.117 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:41.377 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:41.377 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:41.638 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:41.638 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:41.638 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:41.916 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:41.916 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:41.916 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:41.916 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:42.178 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
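The entries that follow show configure_kernel_target building a kernel NVMe-oF TCP target by hand through the nvmet configfs tree, backed by the local /dev/nvme0n1 and listening on 10.0.0.1:4420. As a rough standalone sketch of the same sequence (attribute names follow the upstream nvmet configfs layout; the NQN, device and address simply mirror the values seen in this run, so treat them as placeholders rather than the script's exact implementation):

    # assumes the nvmet and nvmet-tcp modules are loaded and /dev/nvme0n1 is not in use
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
    echo 1            > $subsys/attr_allow_any_host        # accept connections from any host NQN
    echo /dev/nvme0n1 > $subsys/namespaces/1/device_path   # back namespace 1 with the local drive
    echo 1            > $subsys/namespaces/1/enable
    echo 10.0.0.1     > $nvmet/ports/1/addr_traddr         # listen on the test address
    echo tcp          > $nvmet/ports/1/addr_trtype
    echo 4420         > $nvmet/ports/1/addr_trsvcid
    echo ipv4         > $nvmet/ports/1/addr_adrfam
    ln -s $subsys $nvmet/ports/1/subsystems/               # expose the subsystem on the port

Once the port symlink is in place, both the discovery subsystem and nqn.2016-06.io.spdk:testnqn answer on that address, which is what the nvme discover and spdk_nvme_identify output below reports.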
00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:42.178 No valid GPT data, bailing 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:42.178 00:26:42.178 Discovery Log Number of Records 2, Generation counter 2 00:26:42.178 =====Discovery Log Entry 0====== 00:26:42.178 trtype: tcp 00:26:42.178 adrfam: ipv4 00:26:42.178 subtype: current discovery subsystem 00:26:42.178 treq: not specified, sq flow control disable supported 00:26:42.178 portid: 1 00:26:42.178 trsvcid: 4420 00:26:42.178 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:42.178 traddr: 10.0.0.1 00:26:42.178 eflags: none 00:26:42.178 sectype: none 00:26:42.178 =====Discovery Log Entry 1====== 00:26:42.178 trtype: tcp 00:26:42.178 adrfam: ipv4 00:26:42.178 subtype: nvme subsystem 00:26:42.178 treq: not specified, sq flow control disable 
supported 00:26:42.178 portid: 1 00:26:42.178 trsvcid: 4420 00:26:42.178 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:42.178 traddr: 10.0.0.1 00:26:42.178 eflags: none 00:26:42.178 sectype: none 00:26:42.178 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:42.178 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:42.440 ===================================================== 00:26:42.440 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:42.440 ===================================================== 00:26:42.440 Controller Capabilities/Features 00:26:42.440 ================================ 00:26:42.440 Vendor ID: 0000 00:26:42.440 Subsystem Vendor ID: 0000 00:26:42.440 Serial Number: b3723a2e0c2d23513e6e 00:26:42.440 Model Number: Linux 00:26:42.440 Firmware Version: 6.8.9-20 00:26:42.440 Recommended Arb Burst: 0 00:26:42.440 IEEE OUI Identifier: 00 00 00 00:26:42.440 Multi-path I/O 00:26:42.440 May have multiple subsystem ports: No 00:26:42.440 May have multiple controllers: No 00:26:42.440 Associated with SR-IOV VF: No 00:26:42.440 Max Data Transfer Size: Unlimited 00:26:42.440 Max Number of Namespaces: 0 00:26:42.440 Max Number of I/O Queues: 1024 00:26:42.440 NVMe Specification Version (VS): 1.3 00:26:42.440 NVMe Specification Version (Identify): 1.3 00:26:42.440 Maximum Queue Entries: 1024 00:26:42.440 Contiguous Queues Required: No 00:26:42.440 Arbitration Mechanisms Supported 00:26:42.440 Weighted Round Robin: Not Supported 00:26:42.440 Vendor Specific: Not Supported 00:26:42.440 Reset Timeout: 7500 ms 00:26:42.440 Doorbell Stride: 4 bytes 00:26:42.440 NVM Subsystem Reset: Not Supported 00:26:42.440 Command Sets Supported 00:26:42.440 NVM Command Set: Supported 00:26:42.440 Boot Partition: Not Supported 00:26:42.440 Memory Page Size Minimum: 4096 bytes 00:26:42.440 Memory Page Size Maximum: 4096 bytes 00:26:42.440 Persistent Memory Region: Not Supported 00:26:42.440 Optional Asynchronous Events Supported 00:26:42.440 Namespace Attribute Notices: Not Supported 00:26:42.440 Firmware Activation Notices: Not Supported 00:26:42.440 ANA Change Notices: Not Supported 00:26:42.440 PLE Aggregate Log Change Notices: Not Supported 00:26:42.440 LBA Status Info Alert Notices: Not Supported 00:26:42.440 EGE Aggregate Log Change Notices: Not Supported 00:26:42.440 Normal NVM Subsystem Shutdown event: Not Supported 00:26:42.440 Zone Descriptor Change Notices: Not Supported 00:26:42.440 Discovery Log Change Notices: Supported 00:26:42.440 Controller Attributes 00:26:42.440 128-bit Host Identifier: Not Supported 00:26:42.440 Non-Operational Permissive Mode: Not Supported 00:26:42.440 NVM Sets: Not Supported 00:26:42.440 Read Recovery Levels: Not Supported 00:26:42.440 Endurance Groups: Not Supported 00:26:42.440 Predictable Latency Mode: Not Supported 00:26:42.440 Traffic Based Keep ALive: Not Supported 00:26:42.440 Namespace Granularity: Not Supported 00:26:42.440 SQ Associations: Not Supported 00:26:42.440 UUID List: Not Supported 00:26:42.441 Multi-Domain Subsystem: Not Supported 00:26:42.441 Fixed Capacity Management: Not Supported 00:26:42.441 Variable Capacity Management: Not Supported 00:26:42.441 Delete Endurance Group: Not Supported 00:26:42.441 Delete NVM Set: Not Supported 00:26:42.441 Extended LBA Formats Supported: Not Supported 00:26:42.441 Flexible Data Placement 
Supported: Not Supported 00:26:42.441 00:26:42.441 Controller Memory Buffer Support 00:26:42.441 ================================ 00:26:42.441 Supported: No 00:26:42.441 00:26:42.441 Persistent Memory Region Support 00:26:42.441 ================================ 00:26:42.441 Supported: No 00:26:42.441 00:26:42.441 Admin Command Set Attributes 00:26:42.441 ============================ 00:26:42.441 Security Send/Receive: Not Supported 00:26:42.441 Format NVM: Not Supported 00:26:42.441 Firmware Activate/Download: Not Supported 00:26:42.441 Namespace Management: Not Supported 00:26:42.441 Device Self-Test: Not Supported 00:26:42.441 Directives: Not Supported 00:26:42.441 NVMe-MI: Not Supported 00:26:42.441 Virtualization Management: Not Supported 00:26:42.441 Doorbell Buffer Config: Not Supported 00:26:42.441 Get LBA Status Capability: Not Supported 00:26:42.441 Command & Feature Lockdown Capability: Not Supported 00:26:42.441 Abort Command Limit: 1 00:26:42.441 Async Event Request Limit: 1 00:26:42.441 Number of Firmware Slots: N/A 00:26:42.441 Firmware Slot 1 Read-Only: N/A 00:26:42.441 Firmware Activation Without Reset: N/A 00:26:42.441 Multiple Update Detection Support: N/A 00:26:42.441 Firmware Update Granularity: No Information Provided 00:26:42.441 Per-Namespace SMART Log: No 00:26:42.441 Asymmetric Namespace Access Log Page: Not Supported 00:26:42.441 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:42.441 Command Effects Log Page: Not Supported 00:26:42.441 Get Log Page Extended Data: Supported 00:26:42.441 Telemetry Log Pages: Not Supported 00:26:42.441 Persistent Event Log Pages: Not Supported 00:26:42.441 Supported Log Pages Log Page: May Support 00:26:42.441 Commands Supported & Effects Log Page: Not Supported 00:26:42.441 Feature Identifiers & Effects Log Page:May Support 00:26:42.441 NVMe-MI Commands & Effects Log Page: May Support 00:26:42.441 Data Area 4 for Telemetry Log: Not Supported 00:26:42.441 Error Log Page Entries Supported: 1 00:26:42.441 Keep Alive: Not Supported 00:26:42.441 00:26:42.441 NVM Command Set Attributes 00:26:42.441 ========================== 00:26:42.441 Submission Queue Entry Size 00:26:42.441 Max: 1 00:26:42.441 Min: 1 00:26:42.441 Completion Queue Entry Size 00:26:42.441 Max: 1 00:26:42.441 Min: 1 00:26:42.441 Number of Namespaces: 0 00:26:42.441 Compare Command: Not Supported 00:26:42.441 Write Uncorrectable Command: Not Supported 00:26:42.441 Dataset Management Command: Not Supported 00:26:42.441 Write Zeroes Command: Not Supported 00:26:42.441 Set Features Save Field: Not Supported 00:26:42.441 Reservations: Not Supported 00:26:42.441 Timestamp: Not Supported 00:26:42.441 Copy: Not Supported 00:26:42.441 Volatile Write Cache: Not Present 00:26:42.441 Atomic Write Unit (Normal): 1 00:26:42.441 Atomic Write Unit (PFail): 1 00:26:42.441 Atomic Compare & Write Unit: 1 00:26:42.441 Fused Compare & Write: Not Supported 00:26:42.441 Scatter-Gather List 00:26:42.441 SGL Command Set: Supported 00:26:42.441 SGL Keyed: Not Supported 00:26:42.441 SGL Bit Bucket Descriptor: Not Supported 00:26:42.441 SGL Metadata Pointer: Not Supported 00:26:42.441 Oversized SGL: Not Supported 00:26:42.441 SGL Metadata Address: Not Supported 00:26:42.441 SGL Offset: Supported 00:26:42.441 Transport SGL Data Block: Not Supported 00:26:42.441 Replay Protected Memory Block: Not Supported 00:26:42.441 00:26:42.441 Firmware Slot Information 00:26:42.441 ========================= 00:26:42.441 Active slot: 0 00:26:42.441 00:26:42.441 00:26:42.441 Error Log 00:26:42.441 
========= 00:26:42.441 00:26:42.441 Active Namespaces 00:26:42.441 ================= 00:26:42.441 Discovery Log Page 00:26:42.441 ================== 00:26:42.441 Generation Counter: 2 00:26:42.441 Number of Records: 2 00:26:42.441 Record Format: 0 00:26:42.441 00:26:42.441 Discovery Log Entry 0 00:26:42.441 ---------------------- 00:26:42.441 Transport Type: 3 (TCP) 00:26:42.441 Address Family: 1 (IPv4) 00:26:42.441 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:42.441 Entry Flags: 00:26:42.441 Duplicate Returned Information: 0 00:26:42.441 Explicit Persistent Connection Support for Discovery: 0 00:26:42.441 Transport Requirements: 00:26:42.441 Secure Channel: Not Specified 00:26:42.441 Port ID: 1 (0x0001) 00:26:42.441 Controller ID: 65535 (0xffff) 00:26:42.441 Admin Max SQ Size: 32 00:26:42.441 Transport Service Identifier: 4420 00:26:42.441 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:42.441 Transport Address: 10.0.0.1 00:26:42.441 Discovery Log Entry 1 00:26:42.441 ---------------------- 00:26:42.441 Transport Type: 3 (TCP) 00:26:42.441 Address Family: 1 (IPv4) 00:26:42.441 Subsystem Type: 2 (NVM Subsystem) 00:26:42.441 Entry Flags: 00:26:42.441 Duplicate Returned Information: 0 00:26:42.441 Explicit Persistent Connection Support for Discovery: 0 00:26:42.441 Transport Requirements: 00:26:42.441 Secure Channel: Not Specified 00:26:42.441 Port ID: 1 (0x0001) 00:26:42.441 Controller ID: 65535 (0xffff) 00:26:42.441 Admin Max SQ Size: 32 00:26:42.441 Transport Service Identifier: 4420 00:26:42.441 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:42.441 Transport Address: 10.0.0.1 00:26:42.441 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:42.441 get_feature(0x01) failed 00:26:42.441 get_feature(0x02) failed 00:26:42.441 get_feature(0x04) failed 00:26:42.441 ===================================================== 00:26:42.441 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:42.441 ===================================================== 00:26:42.441 Controller Capabilities/Features 00:26:42.441 ================================ 00:26:42.441 Vendor ID: 0000 00:26:42.441 Subsystem Vendor ID: 0000 00:26:42.441 Serial Number: be7eb86a3543de1aa286 00:26:42.441 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:42.441 Firmware Version: 6.8.9-20 00:26:42.441 Recommended Arb Burst: 6 00:26:42.441 IEEE OUI Identifier: 00 00 00 00:26:42.441 Multi-path I/O 00:26:42.441 May have multiple subsystem ports: Yes 00:26:42.441 May have multiple controllers: Yes 00:26:42.442 Associated with SR-IOV VF: No 00:26:42.442 Max Data Transfer Size: Unlimited 00:26:42.442 Max Number of Namespaces: 1024 00:26:42.442 Max Number of I/O Queues: 128 00:26:42.442 NVMe Specification Version (VS): 1.3 00:26:42.442 NVMe Specification Version (Identify): 1.3 00:26:42.442 Maximum Queue Entries: 1024 00:26:42.442 Contiguous Queues Required: No 00:26:42.442 Arbitration Mechanisms Supported 00:26:42.442 Weighted Round Robin: Not Supported 00:26:42.442 Vendor Specific: Not Supported 00:26:42.442 Reset Timeout: 7500 ms 00:26:42.442 Doorbell Stride: 4 bytes 00:26:42.442 NVM Subsystem Reset: Not Supported 00:26:42.442 Command Sets Supported 00:26:42.442 NVM Command Set: Supported 00:26:42.442 Boot Partition: Not Supported 00:26:42.442 
Memory Page Size Minimum: 4096 bytes 00:26:42.442 Memory Page Size Maximum: 4096 bytes 00:26:42.442 Persistent Memory Region: Not Supported 00:26:42.442 Optional Asynchronous Events Supported 00:26:42.442 Namespace Attribute Notices: Supported 00:26:42.442 Firmware Activation Notices: Not Supported 00:26:42.442 ANA Change Notices: Supported 00:26:42.442 PLE Aggregate Log Change Notices: Not Supported 00:26:42.442 LBA Status Info Alert Notices: Not Supported 00:26:42.442 EGE Aggregate Log Change Notices: Not Supported 00:26:42.442 Normal NVM Subsystem Shutdown event: Not Supported 00:26:42.442 Zone Descriptor Change Notices: Not Supported 00:26:42.442 Discovery Log Change Notices: Not Supported 00:26:42.442 Controller Attributes 00:26:42.442 128-bit Host Identifier: Supported 00:26:42.442 Non-Operational Permissive Mode: Not Supported 00:26:42.442 NVM Sets: Not Supported 00:26:42.442 Read Recovery Levels: Not Supported 00:26:42.442 Endurance Groups: Not Supported 00:26:42.442 Predictable Latency Mode: Not Supported 00:26:42.442 Traffic Based Keep ALive: Supported 00:26:42.442 Namespace Granularity: Not Supported 00:26:42.442 SQ Associations: Not Supported 00:26:42.442 UUID List: Not Supported 00:26:42.442 Multi-Domain Subsystem: Not Supported 00:26:42.442 Fixed Capacity Management: Not Supported 00:26:42.442 Variable Capacity Management: Not Supported 00:26:42.442 Delete Endurance Group: Not Supported 00:26:42.442 Delete NVM Set: Not Supported 00:26:42.442 Extended LBA Formats Supported: Not Supported 00:26:42.442 Flexible Data Placement Supported: Not Supported 00:26:42.442 00:26:42.442 Controller Memory Buffer Support 00:26:42.442 ================================ 00:26:42.442 Supported: No 00:26:42.442 00:26:42.442 Persistent Memory Region Support 00:26:42.442 ================================ 00:26:42.442 Supported: No 00:26:42.442 00:26:42.442 Admin Command Set Attributes 00:26:42.442 ============================ 00:26:42.442 Security Send/Receive: Not Supported 00:26:42.442 Format NVM: Not Supported 00:26:42.442 Firmware Activate/Download: Not Supported 00:26:42.442 Namespace Management: Not Supported 00:26:42.442 Device Self-Test: Not Supported 00:26:42.442 Directives: Not Supported 00:26:42.442 NVMe-MI: Not Supported 00:26:42.442 Virtualization Management: Not Supported 00:26:42.442 Doorbell Buffer Config: Not Supported 00:26:42.442 Get LBA Status Capability: Not Supported 00:26:42.442 Command & Feature Lockdown Capability: Not Supported 00:26:42.442 Abort Command Limit: 4 00:26:42.442 Async Event Request Limit: 4 00:26:42.442 Number of Firmware Slots: N/A 00:26:42.442 Firmware Slot 1 Read-Only: N/A 00:26:42.442 Firmware Activation Without Reset: N/A 00:26:42.442 Multiple Update Detection Support: N/A 00:26:42.442 Firmware Update Granularity: No Information Provided 00:26:42.442 Per-Namespace SMART Log: Yes 00:26:42.442 Asymmetric Namespace Access Log Page: Supported 00:26:42.442 ANA Transition Time : 10 sec 00:26:42.442 00:26:42.442 Asymmetric Namespace Access Capabilities 00:26:42.442 ANA Optimized State : Supported 00:26:42.442 ANA Non-Optimized State : Supported 00:26:42.442 ANA Inaccessible State : Supported 00:26:42.442 ANA Persistent Loss State : Supported 00:26:42.442 ANA Change State : Supported 00:26:42.442 ANAGRPID is not changed : No 00:26:42.442 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:42.442 00:26:42.442 ANA Group Identifier Maximum : 128 00:26:42.442 Number of ANA Group Identifiers : 128 00:26:42.442 Max Number of Allowed Namespaces : 1024 00:26:42.442 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:42.442 Command Effects Log Page: Supported 00:26:42.442 Get Log Page Extended Data: Supported 00:26:42.442 Telemetry Log Pages: Not Supported 00:26:42.442 Persistent Event Log Pages: Not Supported 00:26:42.442 Supported Log Pages Log Page: May Support 00:26:42.442 Commands Supported & Effects Log Page: Not Supported 00:26:42.442 Feature Identifiers & Effects Log Page:May Support 00:26:42.442 NVMe-MI Commands & Effects Log Page: May Support 00:26:42.442 Data Area 4 for Telemetry Log: Not Supported 00:26:42.442 Error Log Page Entries Supported: 128 00:26:42.442 Keep Alive: Supported 00:26:42.442 Keep Alive Granularity: 1000 ms 00:26:42.442 00:26:42.442 NVM Command Set Attributes 00:26:42.442 ========================== 00:26:42.442 Submission Queue Entry Size 00:26:42.442 Max: 64 00:26:42.442 Min: 64 00:26:42.442 Completion Queue Entry Size 00:26:42.442 Max: 16 00:26:42.442 Min: 16 00:26:42.442 Number of Namespaces: 1024 00:26:42.442 Compare Command: Not Supported 00:26:42.442 Write Uncorrectable Command: Not Supported 00:26:42.442 Dataset Management Command: Supported 00:26:42.442 Write Zeroes Command: Supported 00:26:42.442 Set Features Save Field: Not Supported 00:26:42.442 Reservations: Not Supported 00:26:42.442 Timestamp: Not Supported 00:26:42.442 Copy: Not Supported 00:26:42.442 Volatile Write Cache: Present 00:26:42.442 Atomic Write Unit (Normal): 1 00:26:42.442 Atomic Write Unit (PFail): 1 00:26:42.442 Atomic Compare & Write Unit: 1 00:26:42.442 Fused Compare & Write: Not Supported 00:26:42.442 Scatter-Gather List 00:26:42.442 SGL Command Set: Supported 00:26:42.442 SGL Keyed: Not Supported 00:26:42.442 SGL Bit Bucket Descriptor: Not Supported 00:26:42.442 SGL Metadata Pointer: Not Supported 00:26:42.442 Oversized SGL: Not Supported 00:26:42.442 SGL Metadata Address: Not Supported 00:26:42.442 SGL Offset: Supported 00:26:42.442 Transport SGL Data Block: Not Supported 00:26:42.442 Replay Protected Memory Block: Not Supported 00:26:42.442 00:26:42.442 Firmware Slot Information 00:26:42.442 ========================= 00:26:42.442 Active slot: 0 00:26:42.442 00:26:42.442 Asymmetric Namespace Access 00:26:42.442 =========================== 00:26:42.442 Change Count : 0 00:26:42.442 Number of ANA Group Descriptors : 1 00:26:42.442 ANA Group Descriptor : 0 00:26:42.442 ANA Group ID : 1 00:26:42.442 Number of NSID Values : 1 00:26:42.442 Change Count : 0 00:26:42.442 ANA State : 1 00:26:42.442 Namespace Identifier : 1 00:26:42.442 00:26:42.442 Commands Supported and Effects 00:26:42.442 ============================== 00:26:42.442 Admin Commands 00:26:42.442 -------------- 00:26:42.442 Get Log Page (02h): Supported 00:26:42.442 Identify (06h): Supported 00:26:42.442 Abort (08h): Supported 00:26:42.442 Set Features (09h): Supported 00:26:42.442 Get Features (0Ah): Supported 00:26:42.442 Asynchronous Event Request (0Ch): Supported 00:26:42.442 Keep Alive (18h): Supported 00:26:42.442 I/O Commands 00:26:42.442 ------------ 00:26:42.442 Flush (00h): Supported 00:26:42.442 Write (01h): Supported LBA-Change 00:26:42.442 Read (02h): Supported 00:26:42.442 Write Zeroes (08h): Supported LBA-Change 00:26:42.442 Dataset Management (09h): Supported 00:26:42.442 00:26:42.442 Error Log 00:26:42.442 ========= 00:26:42.442 Entry: 0 00:26:42.442 Error Count: 0x3 00:26:42.442 Submission Queue Id: 0x0 00:26:42.443 Command Id: 0x5 00:26:42.443 Phase Bit: 0 00:26:42.443 Status Code: 0x2 00:26:42.443 Status Code Type: 0x0 00:26:42.443 Do Not Retry: 1 00:26:42.443 
Error Location: 0x28 00:26:42.443 LBA: 0x0 00:26:42.443 Namespace: 0x0 00:26:42.443 Vendor Log Page: 0x0 00:26:42.443 ----------- 00:26:42.443 Entry: 1 00:26:42.443 Error Count: 0x2 00:26:42.443 Submission Queue Id: 0x0 00:26:42.443 Command Id: 0x5 00:26:42.443 Phase Bit: 0 00:26:42.443 Status Code: 0x2 00:26:42.443 Status Code Type: 0x0 00:26:42.443 Do Not Retry: 1 00:26:42.443 Error Location: 0x28 00:26:42.443 LBA: 0x0 00:26:42.443 Namespace: 0x0 00:26:42.443 Vendor Log Page: 0x0 00:26:42.443 ----------- 00:26:42.443 Entry: 2 00:26:42.443 Error Count: 0x1 00:26:42.443 Submission Queue Id: 0x0 00:26:42.443 Command Id: 0x4 00:26:42.443 Phase Bit: 0 00:26:42.443 Status Code: 0x2 00:26:42.443 Status Code Type: 0x0 00:26:42.443 Do Not Retry: 1 00:26:42.443 Error Location: 0x28 00:26:42.443 LBA: 0x0 00:26:42.443 Namespace: 0x0 00:26:42.443 Vendor Log Page: 0x0 00:26:42.443 00:26:42.443 Number of Queues 00:26:42.443 ================ 00:26:42.443 Number of I/O Submission Queues: 128 00:26:42.443 Number of I/O Completion Queues: 128 00:26:42.443 00:26:42.443 ZNS Specific Controller Data 00:26:42.443 ============================ 00:26:42.443 Zone Append Size Limit: 0 00:26:42.443 00:26:42.443 00:26:42.443 Active Namespaces 00:26:42.443 ================= 00:26:42.443 get_feature(0x05) failed 00:26:42.443 Namespace ID:1 00:26:42.443 Command Set Identifier: NVM (00h) 00:26:42.443 Deallocate: Supported 00:26:42.443 Deallocated/Unwritten Error: Not Supported 00:26:42.443 Deallocated Read Value: Unknown 00:26:42.443 Deallocate in Write Zeroes: Not Supported 00:26:42.443 Deallocated Guard Field: 0xFFFF 00:26:42.443 Flush: Supported 00:26:42.443 Reservation: Not Supported 00:26:42.443 Namespace Sharing Capabilities: Multiple Controllers 00:26:42.443 Size (in LBAs): 3750748848 (1788GiB) 00:26:42.443 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:42.443 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:42.443 UUID: 4db27176-fdee-475c-8404-4ed6b242d241 00:26:42.443 Thin Provisioning: Not Supported 00:26:42.443 Per-NS Atomic Units: Yes 00:26:42.443 Atomic Write Unit (Normal): 8 00:26:42.443 Atomic Write Unit (PFail): 8 00:26:42.443 Preferred Write Granularity: 8 00:26:42.443 Atomic Compare & Write Unit: 8 00:26:42.443 Atomic Boundary Size (Normal): 0 00:26:42.443 Atomic Boundary Size (PFail): 0 00:26:42.443 Atomic Boundary Offset: 0 00:26:42.443 NGUID/EUI64 Never Reused: No 00:26:42.443 ANA group ID: 1 00:26:42.443 Namespace Write Protected: No 00:26:42.443 Number of LBA Formats: 1 00:26:42.443 Current LBA Format: LBA Format #00 00:26:42.443 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:42.443 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.443 rmmod nvme_tcp 00:26:42.443 rmmod nvme_fabrics 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.443 00:34:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:44.988 00:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:48.290 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:48.290 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:48.290 00:26:48.290 real 0m18.874s 00:26:48.290 user 0m5.021s 00:26:48.290 sys 0m10.949s 00:26:48.290 00:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:48.290 00:34:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:48.290 ************************************ 00:26:48.290 END TEST nvmf_identify_kernel_target 00:26:48.290 ************************************ 00:26:48.291 00:34:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:48.291 00:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:48.291 00:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.291 00:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.552 ************************************ 00:26:48.552 START TEST nvmf_auth_host 00:26:48.552 ************************************ 00:26:48.552 00:34:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:48.552 * Looking for test storage... 
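The cleanup recorded above is the mirror image of that setup: nvmftestfini strips only the iptables rules that were tagged with an SPDK_NVMF comment when they were inserted, and clean_kernel_target unwinds the configfs tree in reverse order before unloading the modules. A minimal sketch of the same teardown, using the same placeholder paths as the setup sketch earlier:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    # drop only the firewall rules this run added (identified by their comment tag)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # disable the namespace, unlink the subsystem from the port, then remove the tree
    echo 0 > $subsys/namespaces/1/enable
    rm -f $nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir $subsys/namespaces/1 $nvmet/ports/1 $subsys
    modprobe -r nvmet_tcp nvmet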
00:26:48.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.552 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:48.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.553 --rc genhtml_branch_coverage=1 00:26:48.553 --rc genhtml_function_coverage=1 00:26:48.553 --rc genhtml_legend=1 00:26:48.553 --rc geninfo_all_blocks=1 00:26:48.553 --rc geninfo_unexecuted_blocks=1 00:26:48.553 00:26:48.553 ' 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:48.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.553 --rc genhtml_branch_coverage=1 00:26:48.553 --rc genhtml_function_coverage=1 00:26:48.553 --rc genhtml_legend=1 00:26:48.553 --rc geninfo_all_blocks=1 00:26:48.553 --rc geninfo_unexecuted_blocks=1 00:26:48.553 00:26:48.553 ' 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:48.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.553 --rc genhtml_branch_coverage=1 00:26:48.553 --rc genhtml_function_coverage=1 00:26:48.553 --rc genhtml_legend=1 00:26:48.553 --rc geninfo_all_blocks=1 00:26:48.553 --rc geninfo_unexecuted_blocks=1 00:26:48.553 00:26:48.553 ' 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:48.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.553 --rc genhtml_branch_coverage=1 00:26:48.553 --rc genhtml_function_coverage=1 00:26:48.553 --rc genhtml_legend=1 00:26:48.553 --rc geninfo_all_blocks=1 00:26:48.553 --rc geninfo_unexecuted_blocks=1 00:26:48.553 00:26:48.553 ' 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.553 00:34:19 
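The lcov probe above feeds the tool's version string through cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field; here 1.15 sorts below 2, so the older lcov option set (lcov_branch_coverage/lcov_function_coverage) is selected. A simplified, stand-alone version of that comparison, assuming purely numeric fields (the real cmp_versions in scripts/common.sh handles more operators and edge cases):

    # Return 0 (true) when version $1 is strictly older than $2; numeric fields only.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"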
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:48.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:48.553 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.815 00:34:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:56.962 00:34:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:56.962 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.962 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:56.963 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.963 
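gather_supported_nvmf_pci_devs classifies each candidate NIC by the vendor/device pair cached for its PCI function; the 0x8086/0x159b matches above are Intel E810 ports handled by the ice driver, and the Mellanox device-ID checks simply fall through on this machine. A small sketch of reading those IDs straight from sysfs for one function (the BDF comes from the log, the classification table mirrors the arrays built above and is otherwise illustrative):

    bdf=0000:4b:00.0
    vendor=$(cat "/sys/bus/pci/devices/$bdf/vendor")   # e.g. 0x8086
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x159b
    case "$vendor:$device" in
        0x8086:0x159b|0x8086:0x1592) echo "$bdf: Intel E810 (ice)" ;;
        0x8086:0x37d2)               echo "$bdf: Intel X722 (i40e)" ;;
        0x15b3:*)                    echo "$bdf: Mellanox (mlx5)" ;;
        *)                           echo "$bdf: unclassified" ;;
    esac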
00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:56.963 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:56.963 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.963 00:34:26 
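With the NICs classified, the script resolves which Linux net devices sit behind each PCI function by globbing the device's net/ directory in sysfs, which is how it arrives at cvl_0_0 and cvl_0_1 in the "Found net devices under ..." lines above. A minimal equivalent, assuming the same two BDFs:

    for bdf in 0000:4b:00.0 0000:4b:00.1; do
        # Each entry under .../net/ is a netdev owned by this PCI function.
        for dev in "/sys/bus/pci/devices/$bdf/net/"*; do
            [ -e "$dev" ] || continue
            echo "Found net device under $bdf: $(basename "$dev")"
        done
    done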
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:56.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:26:56.963 00:26:56.963 --- 10.0.0.2 ping statistics --- 00:26:56.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.963 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:26:56.963 00:26:56.963 --- 10.0.0.1 ping statistics --- 00:26:56.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.963 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=3401837 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 3401837 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3401837 ']' 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
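nvmf_tcp_init then splits the two E810 ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens port 4420, and the cross pings above verify connectivity before nvmf_tgt is started inside that namespace with nvme_auth tracing and waitforlisten blocks on its RPC socket. A hedged sketch of the same pattern; interface names, addresses and the socket path mirror the log, the binary path is abbreviated, and the polling loop is an illustrative stand-in for the real waitforlisten helper:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    pid=$!
    # Poll until the target's RPC UNIX socket appears (simplified waitforlisten).
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$pid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done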
00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.963 00:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c536ad9a362a0205908f8143c3041439 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.lSU 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c536ad9a362a0205908f8143c3041439 0 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c536ad9a362a0205908f8143c3041439 0 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c536ad9a362a0205908f8143c3041439 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.lSU 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.lSU 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.lSU 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:57.225 00:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7abae133eb3c3520efb5cab87bbca164eec5055a011eddd5d9a359d2a93ab5db 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.3oK 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7abae133eb3c3520efb5cab87bbca164eec5055a011eddd5d9a359d2a93ab5db 3 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7abae133eb3c3520efb5cab87bbca164eec5055a011eddd5d9a359d2a93ab5db 3 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7abae133eb3c3520efb5cab87bbca164eec5055a011eddd5d9a359d2a93ab5db 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.3oK 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.3oK 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3oK 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=9cd997da85934ccb263d719640f8d053681f909c2fb0b0ae 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.kCs 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 9cd997da85934ccb263d719640f8d053681f909c2fb0b0ae 0 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 9cd997da85934ccb263d719640f8d053681f909c2fb0b0ae 0 
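gen_dhchap_key builds each secret by pulling len/2 random bytes from /dev/urandom with xxd, and format_dhchap_key then wraps the hex string into the NVMe DH-HMAC-CHAP representation "DHHC-1:<digest id>:<base64>:". Judging by the DHHC-1 keys echoed near the end of this log, the base64 payload appears to be the ASCII secret with a 4-byte little-endian CRC-32 of it appended; the sketch below reproduces that format under those assumptions and is not the common.sh code itself:

    # Digest ids follow the mapping used above: 0=null, 1=sha256, 2=sha384, 3=sha512.
    gen_key() {
        local len=$1 digest_id=$2 hex
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); d=int(sys.argv[2]); print("DHHC-1:%02x:%s:" % (d, base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode()))' "$hex" "$digest_id"
    }
    gen_key 32 0   # a 32-character null-digest secret, like keys[0] above

The test generates five key/ckey pairs this way (null, sha256, sha384 and sha512 variants) and stores each one, chmod 0600, under /tmp/spdk.key-*.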
00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=9cd997da85934ccb263d719640f8d053681f909c2fb0b0ae 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.kCs 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.kCs 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kCs 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f3ea4d5e657da7242b6d3237d8d15e4412903e82e22b2b5e 00:26:57.225 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.SOT 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f3ea4d5e657da7242b6d3237d8d15e4412903e82e22b2b5e 2 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f3ea4d5e657da7242b6d3237d8d15e4412903e82e22b2b5e 2 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f3ea4d5e657da7242b6d3237d8d15e4412903e82e22b2b5e 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.SOT 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.SOT 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.SOT 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:57.488 00:34:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=68dc8e5905c864d833e9155a63df993b 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.h0S 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 68dc8e5905c864d833e9155a63df993b 1 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 68dc8e5905c864d833e9155a63df993b 1 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=68dc8e5905c864d833e9155a63df993b 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.h0S 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.h0S 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.h0S 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=33f43e3647ff142de9901f0efbc131fb 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.wqT 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 33f43e3647ff142de9901f0efbc131fb 1 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 33f43e3647ff142de9901f0efbc131fb 1 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:57.488 00:34:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=33f43e3647ff142de9901f0efbc131fb 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.wqT 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.wqT 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wqT 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1bac542ccd5a19503799b2120a70332b799204cc1bc6b9c6 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.JEg 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1bac542ccd5a19503799b2120a70332b799204cc1bc6b9c6 2 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1bac542ccd5a19503799b2120a70332b799204cc1bc6b9c6 2 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1bac542ccd5a19503799b2120a70332b799204cc1bc6b9c6 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.JEg 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.JEg 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.JEg 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:57.488 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:57.749 00:34:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:57.749 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=05e55490844f11df9a056e74e3866b42 00:26:57.749 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:57.749 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.cfh 00:26:57.749 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 05e55490844f11df9a056e74e3866b42 0 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 05e55490844f11df9a056e74e3866b42 0 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=05e55490844f11df9a056e74e3866b42 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.cfh 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.cfh 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.cfh 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b963c5ce8dbaa43f17623341a84bd8fbdb57cba53f1d304a7ffc43a2d0c8c8e4 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.rH6 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b963c5ce8dbaa43f17623341a84bd8fbdb57cba53f1d304a7ffc43a2d0c8c8e4 3 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b963c5ce8dbaa43f17623341a84bd8fbdb57cba53f1d304a7ffc43a2d0c8c8e4 3 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b963c5ce8dbaa43f17623341a84bd8fbdb57cba53f1d304a7ffc43a2d0c8c8e4 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.rH6 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.rH6 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.rH6 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3401837 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3401837 ']' 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:57.750 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.011 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lSU 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3oK ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3oK 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kCs 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.SOT ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.SOT 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.h0S 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wqT ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wqT 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.JEg 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.cfh ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.cfh 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rH6 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.012 00:34:28 
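Each generated secret, and its optional controller-side counterpart, is then registered with the running target through the keyring_file_add_key RPC under the names key0..key4 and ckey0..ckey3 that the auth test references later. rpc_cmd is the autotest wrapper around scripts/rpc.py aimed at the target's RPC socket; a hedged equivalent using rpc.py directly (file names match the keys generated above, the socket path is the default shown in the log):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.lSU
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3oK
    $RPC keyring_file_add_key key1  /tmp/spdk.key-null.kCs
    $RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SOT
    # ... continuing likewise for key2/ckey2, key3/ckey3 and key4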
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:58.012 00:34:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:01.315 Waiting for block devices as requested 00:27:01.574 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:01.574 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:01.574 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:01.574 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:01.835 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:01.835 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:01.835 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:02.095 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:02.095 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:02.095 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:02.356 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:02.356 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:02.356 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:02.356 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:02.617 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:02.617 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:02.617 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:03.559 No valid GPT data, bailing 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:03.559 00:34:33 
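On the kernel side, configure_kernel_target assembles the initiator-facing NVMe-oF target out of nvmet configfs objects: the local /dev/nvme0n1 (probed above with spdk-gpt.py, "No valid GPT data, bailing", to confirm it is unused) becomes namespace 1 of subsystem nqn.2024-02.io.spdk:cnode0, and the attribute writes and port symlink that follow in the log expose it on a TCP listener at 10.0.0.1:4420, which the nvme discover output further down then confirms. A condensed sketch of that configfs sequence; paths and values follow the log, and the attribute names are the standard nvmet ones rather than a verbatim copy of common.sh:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"        # auth.sh later restricts this to its host NQN
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"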
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:03.559 00:27:03.559 Discovery Log Number of Records 2, Generation counter 2 00:27:03.559 =====Discovery Log Entry 0====== 00:27:03.559 trtype: tcp 00:27:03.559 adrfam: ipv4 00:27:03.559 subtype: current discovery subsystem 00:27:03.559 treq: not specified, sq flow control disable supported 00:27:03.559 portid: 1 00:27:03.559 trsvcid: 4420 00:27:03.559 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:03.559 traddr: 10.0.0.1 00:27:03.559 eflags: none 00:27:03.559 sectype: none 00:27:03.559 =====Discovery Log Entry 1====== 00:27:03.559 trtype: tcp 00:27:03.559 adrfam: ipv4 00:27:03.559 subtype: nvme subsystem 00:27:03.559 treq: not specified, sq flow control disable supported 00:27:03.559 portid: 1 00:27:03.559 trsvcid: 4420 00:27:03.559 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:03.559 traddr: 10.0.0.1 00:27:03.559 eflags: none 00:27:03.559 sectype: none 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.559 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
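For reference, the configure_kernel_target sequence traced above reduces to a small configfs recipe: create the subsystem, namespace, and port directories, point the namespace at the freed /dev/nvme0n1, bind the port to 10.0.0.1:4420 over TCP, and link the subsystem to the port. The echoed values are taken from the log; the attribute file names (device_path, enable, addr_*) are the standard Linux nvmet configfs attributes and are assumed here, since the trace does not show the redirection targets. A minimal sketch:

  # Kernel NVMe-oF/TCP target as set up by nvmf/common.sh above (sketch, attribute names assumed)
  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"              # listener address
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"              # expose the subsystem on the port
  # The trace also echoes a model string (SPDK-<nqn>) and an allow-any-host flag into the
  # subsystem; once the port link exists, `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns
  # the two discovery log entries shown above.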
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.560 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:03.560 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:03.560 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:03.560 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:03.560 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:03.560 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:03.560 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:03.560 00:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.560 nvme0n1 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.560 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
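The host/auth.sh steps above then switch the target from allow-any-host to per-host DH-HMAC-CHAP: a host entry nqn.2024-02.io.spdk:host0 is created, linked into the subsystem's allowed_hosts, and each nvmet_auth_set_key call programs the digest, DH group, and DHHC-1 secrets for that host. The echoed values appear in the log; the attribute names below (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the usual nvmet host attributes and are an assumption, as the trace again omits the redirection targets. Sketch for keyid 1 (secrets truncated here; full values are in the log):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'        > "$host/dhchap_hash"      # HMAC digest for this host
  echo ffdhe2048             > "$host/dhchap_dhgroup"   # DH group for the exchange
  echo 'DHHC-1:00:OWNkOT...' > "$host/dhchap_key"       # host secret (key1)
  echo 'DHHC-1:02:ZjNlYT...' > "$host/dhchap_ctrl_key"  # controller secret (ckey1) for bidirectional auth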
00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.820 nvme0n1 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.820 00:34:34 
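On the SPDK side, each connect_authenticate iteration is a short JSON-RPC sequence: restrict the bdev_nvme module to the digest and DH group under test, attach a controller to the kernel target with the matching DH-HMAC-CHAP keys, confirm that nvme0 shows up, and detach it. rpc_cmd in the trace is the autotest wrapper around the RPC socket; invoked directly through scripts/rpc.py, and assuming key0/ckey0 name key material registered earlier in the test (outside this excerpt), one iteration looks roughly like:

  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0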
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.820 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.821 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.821 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.821 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.821 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.082 nvme0n1 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.082 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.342 nvme0n1 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.342 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.343 00:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.603 nvme0n1 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.603 nvme0n1 00:27:04.603 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.864 00:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
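The rest of this test phase repeats that pattern exhaustively: for every digest, every DH group, and every key ID, the target-side secret is reprogrammed and a fresh authenticated connection is made and torn down. In outline, following the loop variables visible in the trace (digests sha256/sha384/sha512, DH groups ffdhe2048 through ffdhe8192, key IDs 0-4; the function bodies are sketched above):

  for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do                         # key IDs 0..4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the target-side secret
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach on the host
      done
    done
  done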
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.864 nvme0n1 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.864 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.125 
00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.125 nvme0n1 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.125 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.387 00:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.387 nvme0n1 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.387 00:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.649 00:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.649 nvme0n1 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.649 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:05.909 00:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.909 nvme0n1 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.909 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.169 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.169 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.169 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.170 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.430 nvme0n1 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:06.430 00:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.430 00:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.690 nvme0n1 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
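The trace above is one pass of connect_authenticate for the current digest/dhgroup/keyid: the host-side NVMe bdev driver is told which DH-HMAC-CHAP parameters to negotiate, a controller is attached with the matching secrets, its presence is verified, and it is detached again before the next combination. A minimal standalone sketch of that host-side sequence, assuming the SPDK RPC client at scripts/rpc.py, a target already listening on 10.0.0.1:4420, and secrets already registered under the names key2/ckey2 (that registration happens earlier in auth.sh, outside this excerpt):

#!/usr/bin/env bash
# Hedged sketch of one host-side connect_authenticate pass (sha256 / ffdhe4096 / keyid 2).
# Assumptions: scripts/rpc.py is the SPDK RPC client, the target listens on
# 10.0.0.1:4420, and the DH-HMAC-CHAP secrets were registered earlier as key2/ckey2.
set -e
rpc=./scripts/rpc.py

# Restrict the host-side driver to the digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Attach the controller, authenticating with key2 (and ckey2 for bidirectional auth).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# The attach only succeeds if authentication did; confirm the controller, then clean up.
$rpc bdev_nvme_get_controllers | grep -q '"nvme0"'
$rpc bdev_nvme_detach_controller nvme0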
00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.690 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.951 nvme0n1 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.951 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.211 nvme0n1 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.211 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.471 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.471 00:34:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.472 00:34:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.731 nvme0n1 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.731 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.990 nvme0n1 00:27:07.990 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.990 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.990 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.990 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.990 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.990 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 
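The auth.sh@42-51 entries above are the target-side half of each iteration (nvmet_auth_set_key): before the host connects, the expected hash, DH group, and DH-HMAC-CHAP secrets for the test host are pushed to the kernel nvmet target. The redirect targets of those echo commands are not visible in this excerpt; a minimal sketch assuming the usual nvmet configfs attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>/, using the keyid 1 secrets shown in the log:

#!/usr/bin/env bash
# Hedged sketch of the target-side key setup (nvmet_auth_set_key sha256 ffdhe6144 1).
# Assumption: the Linux kernel nvmet target is in use and exposes the DH-HMAC-CHAP
# attributes below via configfs; the actual redirect targets are not shown in the log.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # digest the target will negotiate
echo ffdhe6144      > "$host_dir/dhchap_dhgroup"  # DH group the target will negotiate
echo 'DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==:' \
    > "$host_dir/dhchap_key"                      # host secret for keyid 1
echo 'DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==:' \
    > "$host_dir/dhchap_ctrl_key"                 # controller secret (set only when a ckey exists)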
00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.250 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.251 00:34:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 nvme0n1 00:27:08.511 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.511 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.511 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.511 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.511 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.511 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.511 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.511 00:34:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.511 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.511 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.771 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.772 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.772 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.772 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.772 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.772 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.772 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.772 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.772 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.032 nvme0n1 00:27:09.032 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.032 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.033 00:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.605 nvme0n1 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.605 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.176 nvme0n1 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.176 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.177 00:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:10.754 nvme0n1 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.754 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.360 nvme0n1 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.360 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:11.648 
00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.648 00:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.648 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.220 nvme0n1 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.220 
00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.220 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.221 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.221 00:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.792 nvme0n1 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.792 00:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.733 nvme0n1 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.733 nvme0n1 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.733 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.734 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.995 nvme0n1 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:13.995 00:34:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.995 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.257 nvme0n1 00:27:14.257 00:34:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.257 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 nvme0n1 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.518 00:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 nvme0n1 00:27:14.518 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.518 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.518 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.518 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.518 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.518 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.518 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.778 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.778 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.778 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.778 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.778 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.778 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:14.778 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.778 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.779 nvme0n1 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.779 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.039 
00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.039 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.040 00:34:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.040 nvme0n1 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.040 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.300 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.301 nvme0n1 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.301 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.561 00:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.561 nvme0n1 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.561 
00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.561 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.821 nvme0n1 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.821 
00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.821 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.082 nvme0n1 00:27:16.082 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.082 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.082 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.082 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.082 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.082 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.343 00:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.343 00:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.604 nvme0n1 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.604 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.864 nvme0n1 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.864 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.126 nvme0n1 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.126 00:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.126 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.386 nvme0n1 00:27:17.386 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.386 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.386 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.386 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.386 00:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.386 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.646 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.907 nvme0n1 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:17.907 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.168 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.429 nvme0n1 00:27:18.429 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.429 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.429 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.429 00:34:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.429 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.429 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.429 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.429 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.429 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.429 00:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.429 00:34:49 
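Each key index in this loop is exercised the same way: the target-side half traced at host/auth.sh@48-51 pushes the digest, DH group and DH-HMAC-CHAP secrets for the host NQN, then the host side reconnects with the matching key. A minimal sketch of that target-side step follows; the nvmet configfs destinations are an assumption for illustration only, since the trace shows the echoed values but not where they are written.
  # Sketch of the target-side step traced at host/auth.sh@48-51. The echoed
  # values come from the log; the configfs paths below are assumed for
  # illustration and are not shown in this trace.
  nvmet_auth_set_key_sketch() {
      local digest=$1 dhgroup=$2 key=$3 ckey=$4
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha384)
      echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe6144
      echo "${key}"          > "${host}/dhchap_key"       # DHHC-1:xx:...: secret
      [[ -n ${ckey} ]] && echo "${ckey}" > "${host}/dhchap_ctrl_key"
  }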
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.429 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.998 nvme0n1 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.998 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.999 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.999 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.999 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.999 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.999 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.999 
00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.569 nvme0n1 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.569 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.570 00:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.831 nvme0n1 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.831 00:34:50 
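The host-side half of each iteration (connect_authenticate) is four RPCs, all visible verbatim in the trace at host/auth.sh@60, @61, @64 and @65. Rewritten as direct scripts/rpc.py calls for the sha384/ffdhe6144/key1 case, assuming a standard SPDK checkout layout and that key1/ckey1 were registered with the keyring earlier in the test:
  # Configure allowed DH-HMAC-CHAP parameters, attach with the key pair,
  # verify the controller appeared, then detach (mirrors the traced rpc_cmd calls).
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0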
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.831 00:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.772 nvme0n1 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.772 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.773 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.344 nvme0n1 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.344 
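The get_main_ns_ip helper traced repeatedly at nvmf/common.sh@767-781 only decides which environment variable carries the connect address for the transport in use: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which resolves to 10.0.0.1 in this run. A condensed sketch, where the transport variable name is an assumption (the trace only shows its expanded value, tcp):
  # Condensed view of get_main_ns_ip as traced at nvmf/common.sh@767-781.
  get_main_ns_ip_sketch() {
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      local var=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
      [[ -n ${!var} ]] && echo "${!var}"            # -> 10.0.0.1 in this run
  }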
00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.344 00:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.915 nvme0n1 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.915 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.176 00:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.747 nvme0n1 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.747 00:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.747 00:34:53 
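Key index 4 is the one slot without a bidirectional secret: ckey is empty at host/auth.sh@46, so the conditional expansion at host/auth.sh@58 adds no --dhchap-ctrlr-key flag and the controller is attached with --dhchap-key key4 alone, as seen in the attach call above. Illustrated with hypothetical array contents (only index 4 being empty reflects this log):
  # The conditional expansion copied from host/auth.sh@58.
  declare -a ckeys=([1]="DHHC-1:02:example-ctrlr-secret:" [4]="")
  keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey1
  keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 0 -> no controller key flag for key index 4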
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.747 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.318 nvme0n1 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.318 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.580 00:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:23.580 nvme0n1 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:23.580 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.581 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.841 nvme0n1 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:23.841 
00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.841 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.101 nvme0n1 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.101 
00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.101 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.362 nvme0n1 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.362 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.363 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.363 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.363 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.363 nvme0n1 00:27:24.363 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.363 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.363 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.363 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.363 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.363 00:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.624 nvme0n1 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.624 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.885 
00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.885 00:34:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.885 nvme0n1 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.885 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:25.147 00:34:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.147 nvme0n1 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.147 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.407 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.407 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.407 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.407 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.408 00:34:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.408 nvme0n1 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.408 00:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.408 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.408 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.408 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.408 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:25.668 
00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
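Each iteration traced above repeats the same DH-CHAP cycle: nvmet_auth_set_key publishes the digest, dhgroup and DHHC-1 secret (plus the controller secret when one exists) on the target side, bdev_nvme_set_options restricts the host to the matching --dhchap-digests/--dhchap-dhgroups, bdev_nvme_attach_controller connects to 10.0.0.1:4420 with --dhchap-key keyN and --dhchap-ctrlr-key ckeyN, and the pass criterion is that bdev_nvme_get_controllers reports nvme0, which is then detached before the next combination. The lines below are a minimal host-side sketch of that cycle, not a copy of auth.sh: they assume rpc.py from SPDK's scripts/ directory is on PATH and that the DHHC-1 secrets were already registered earlier in the run as keyring entries named key${keyid}/ckey${keyid}.

#!/usr/bin/env bash
# Sketch (assumptions above): exercise one digest/dhgroup/key combination
# against the target used throughout this log.
digest=sha512
dhgroup=ffdhe2048        # the runs above also cover ffdhe3072 and ffdhe4096
keyid=2                  # key index under test; the controller key is optional

# Restrict the host to the digest/dhgroup pair being tested.
rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect with in-band authentication. 10.0.0.1 is what the get_main_ns_ip
# helper traced above resolves for "-t tcp" (NVMF_INITIATOR_IP; rdma would use
# NVMF_FIRST_TARGET_IP instead).
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The step passes when the new controller shows up by name ...
[[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

# ... and is torn down before the next key/dhgroup combination.
rpc.py bdev_nvme_detach_controller nvme0

In the log itself the same sequence is driven through rpc_cmd and the connect_authenticate helper, with the outer loops supplying every dhgroup and key index in turn.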
00:27:25.668 nvme0n1 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.668 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.928 00:34:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.928 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.929 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.929 nvme0n1 00:27:25.929 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.929 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.188 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.188 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.189 00:34:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.189 00:34:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.189 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.449 nvme0n1 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.449 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.450 00:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.714 nvme0n1 00:27:26.714 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.714 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.714 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.714 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.715 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.975 nvme0n1 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.975 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.234 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.234 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.234 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:27.234 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.235 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.495 nvme0n1 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.495 00:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.495 00:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.755 nvme0n1 00:27:27.755 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.755 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.755 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.755 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.755 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.755 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.015 00:34:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.015 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.275 nvme0n1 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.275 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.276 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.276 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.276 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.276 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.276 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.276 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.276 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.276 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.535 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.535 00:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.795 nvme0n1 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.795 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.796 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.365 nvme0n1 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.365 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.366 00:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.366 00:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.626 nvme0n1 00:27:29.626 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.626 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.626 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.626 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.626 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUzNmFkOWEzNjJhMDIwNTkwOGY4MTQzYzMwNDE0MzlD04KG: 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: ]] 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2FiYWUxMzNlYjNjMzUyMGVmYjVjYWI4N2JiY2ExNjRlZWM1MDU1YTAxMWVkZGQ1ZDlhMzU5ZDJhOTNhYjVkYj/Vaq8=: 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.886 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.457 nvme0n1 00:27:30.457 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.457 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.457 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.457 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.457 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.457 00:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.457 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.397 nvme0n1 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.397 00:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.397 00:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.397 00:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.968 nvme0n1 00:27:31.968 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.968 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.968 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhYzU0MmNjZDVhMTk1MDM3OTliMjEyMGE3MDMzMmI3OTkyMDRjYzFiYzZiOWM2okmFTA==: 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: ]] 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDVlNTU0OTA4NDRmMTFkZjlhMDU2ZTc0ZTM4NjZiNDL//Az2: 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.969 00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.969 
00:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.539 nvme0n1 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk2M2M1Y2U4ZGJhYTQzZjE3NjIzMzQxYTg0YmQ4ZmJkYjU3Y2JhNTNmMWQzMDRhN2ZmYzQzYTJkMGM4YzhlNB7igZA=: 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.539 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 nvme0n1 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.478 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.478 request: 00:27:33.478 { 00:27:33.478 "name": "nvme0", 00:27:33.478 "trtype": "tcp", 00:27:33.478 "traddr": "10.0.0.1", 00:27:33.478 "adrfam": "ipv4", 00:27:33.478 "trsvcid": "4420", 00:27:33.478 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:33.478 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:33.478 "prchk_reftag": false, 00:27:33.478 "prchk_guard": false, 00:27:33.478 "hdgst": false, 00:27:33.479 "ddgst": false, 00:27:33.479 "allow_unrecognized_csi": false, 00:27:33.479 "method": "bdev_nvme_attach_controller", 00:27:33.479 "req_id": 1 00:27:33.479 } 00:27:33.479 Got JSON-RPC error response 00:27:33.479 response: 00:27:33.479 { 00:27:33.479 "code": -5, 00:27:33.479 "message": "Input/output error" 00:27:33.479 } 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.479 request: 00:27:33.479 { 00:27:33.479 "name": "nvme0", 00:27:33.479 "trtype": "tcp", 00:27:33.479 "traddr": "10.0.0.1", 00:27:33.479 "adrfam": "ipv4", 00:27:33.479 "trsvcid": "4420", 00:27:33.479 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:33.479 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:33.479 "prchk_reftag": false, 00:27:33.479 "prchk_guard": false, 00:27:33.479 "hdgst": false, 00:27:33.479 "ddgst": false, 00:27:33.479 "dhchap_key": "key2", 00:27:33.479 "allow_unrecognized_csi": false, 00:27:33.479 "method": "bdev_nvme_attach_controller", 00:27:33.479 "req_id": 1 00:27:33.479 } 00:27:33.479 Got JSON-RPC error response 00:27:33.479 response: 00:27:33.479 { 00:27:33.479 "code": -5, 00:27:33.479 "message": "Input/output error" 00:27:33.479 } 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.479 00:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
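Note on the two rejected attach attempts traced above: host/auth.sh deliberately connects once without any DH-HMAC-CHAP key and once with only key2, and the NOT helper from autotest_common.sh asserts that bdev_nvme_attach_controller fails with JSON-RPC error -5 ("Input/output error"). As a rough sketch of the same calls outside the harness (assuming the test's rpc_cmd wrapper forwards to scripts/rpc.py; addresses, NQNs and flags are copied verbatim from the trace):

  # Host-side DH-CHAP setup followed by an attach that only supplies key2
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
  # With a missing or mismatched key the target rejects authentication and the RPC
  # returns code -5, which the test treats as the expected (passing) outcome.

The same pattern is exercised further down for bdev_nvme_set_keys, where a mismatched key/ckey pair is expected to fail with -13 ("Permission denied") instead.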
00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.479 request: 00:27:33.479 { 00:27:33.479 "name": "nvme0", 00:27:33.479 "trtype": "tcp", 00:27:33.479 "traddr": "10.0.0.1", 00:27:33.479 "adrfam": "ipv4", 00:27:33.479 "trsvcid": "4420", 00:27:33.479 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:33.479 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:33.479 "prchk_reftag": false, 00:27:33.479 "prchk_guard": false, 00:27:33.479 "hdgst": false, 00:27:33.479 "ddgst": false, 00:27:33.479 "dhchap_key": "key1", 00:27:33.479 "dhchap_ctrlr_key": "ckey2", 00:27:33.479 "allow_unrecognized_csi": false, 00:27:33.479 "method": "bdev_nvme_attach_controller", 00:27:33.479 "req_id": 1 00:27:33.479 } 00:27:33.479 Got JSON-RPC error response 00:27:33.479 response: 00:27:33.479 { 00:27:33.479 "code": -5, 00:27:33.479 "message": "Input/output 
error" 00:27:33.479 } 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.479 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.740 nvme0n1 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.740 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.000 request: 00:27:34.000 { 00:27:34.000 "name": "nvme0", 00:27:34.000 "dhchap_key": "key1", 00:27:34.000 "dhchap_ctrlr_key": "ckey2", 00:27:34.000 "method": "bdev_nvme_set_keys", 00:27:34.000 "req_id": 1 00:27:34.000 } 00:27:34.000 Got JSON-RPC error response 00:27:34.000 response: 00:27:34.000 { 00:27:34.000 "code": -13, 00:27:34.000 "message": "Permission denied" 00:27:34.000 } 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:34.000 00:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkOTk3ZGE4NTkzNGNjYjI2M2Q3MTk2NDBmOGQwNTM2ODFmOTA5YzJmYjBiMGFloiTCEw==: 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: ]] 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjNlYTRkNWU2NTdkYTcyNDJiNmQzMjM3ZDhkMTVlNDQxMjkwM2U4MmUyMmIyYjVlkmQc+Q==: 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.974 
00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.974 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.240 nvme0n1 00:27:35.240 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.240 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:35.240 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.240 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.240 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.240 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.240 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:35.240 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjhkYzhlNTkwNWM4NjRkODMzZTkxNTVhNjNkZjk5M2KPLwBm: 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: ]] 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNmNDNlMzY0N2ZmMTQyZGU5OTAxZjBlZmJjMTMxZmJPokLg: 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:35.241 00:35:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.241 request: 00:27:35.241 { 00:27:35.241 "name": "nvme0", 00:27:35.241 "dhchap_key": "key2", 00:27:35.241 "dhchap_ctrlr_key": "ckey1", 00:27:35.241 "method": "bdev_nvme_set_keys", 00:27:35.241 "req_id": 1 00:27:35.241 } 00:27:35.241 Got JSON-RPC error response 00:27:35.241 response: 00:27:35.241 { 00:27:35.241 "code": -13, 00:27:35.241 "message": "Permission denied" 00:27:35.241 } 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:35.241 00:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:36.180 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.180 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:36.180 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.180 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.180 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.441 rmmod nvme_tcp 00:27:36.441 rmmod nvme_fabrics 00:27:36.441 
00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 3401837 ']' 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 3401837 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3401837 ']' 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3401837 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3401837 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3401837' 00:27:36.441 killing process with pid 3401837 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3401837 00:27:36.441 00:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3401837 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.441 00:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:39.010 00:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:39.010 00:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:42.309 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:42.310 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:42.310 00:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lSU /tmp/spdk.key-null.kCs /tmp/spdk.key-sha256.h0S /tmp/spdk.key-sha384.JEg /tmp/spdk.key-sha512.rH6 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:42.310 00:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:46.523 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:46.523 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:27:46.523 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:46.523 00:27:46.523 real 0m57.497s 00:27:46.523 user 0m51.540s 00:27:46.523 sys 0m15.574s 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.523 ************************************ 00:27:46.523 END TEST nvmf_auth_host 00:27:46.523 ************************************ 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.523 ************************************ 00:27:46.523 START TEST nvmf_digest 00:27:46.523 ************************************ 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:46.523 * Looking for test storage... 00:27:46.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 
00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:46.523 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:46.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.524 --rc genhtml_branch_coverage=1 00:27:46.524 --rc genhtml_function_coverage=1 00:27:46.524 --rc genhtml_legend=1 00:27:46.524 --rc geninfo_all_blocks=1 00:27:46.524 --rc geninfo_unexecuted_blocks=1 00:27:46.524 00:27:46.524 ' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:46.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.524 --rc genhtml_branch_coverage=1 00:27:46.524 --rc genhtml_function_coverage=1 00:27:46.524 --rc genhtml_legend=1 00:27:46.524 --rc geninfo_all_blocks=1 00:27:46.524 --rc geninfo_unexecuted_blocks=1 00:27:46.524 00:27:46.524 ' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:46.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.524 --rc genhtml_branch_coverage=1 00:27:46.524 --rc genhtml_function_coverage=1 00:27:46.524 --rc genhtml_legend=1 00:27:46.524 --rc geninfo_all_blocks=1 00:27:46.524 --rc geninfo_unexecuted_blocks=1 00:27:46.524 00:27:46.524 ' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:46.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.524 --rc genhtml_branch_coverage=1 00:27:46.524 --rc genhtml_function_coverage=1 00:27:46.524 --rc genhtml_legend=1 00:27:46.524 --rc geninfo_all_blocks=1 00:27:46.524 --rc geninfo_unexecuted_blocks=1 00:27:46.524 00:27:46.524 ' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
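Before any digest cases run, digest.sh goes through the shared preamble traced above: it probes the installed lcov (1.15 here), and because that is older than 2 it keeps the old-style "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" spelling in LCOV_OPTS, then sources test/nvmf/common.sh. The cmp_versions trace splits each version on dots and compares field by field; a minimal stand-alone equivalent of that check (an illustration only, not the repo's cmp_versions implementation) is:

  # True when version string $1 sorts strictly before $2
  version_lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2: keep lcov_*-prefixed rc options"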
00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:46.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:46.524 00:35:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
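At this point nvmf/common.sh is building its tables of supported NIC device IDs: Intel E810 (0x1592, 0x159b), X722 (0x37d2) and a list of Mellanox ConnectX parts, which it then matches against the machine's PCI bus (the "Found 0000:4b:00.0 (0x8086 - 0x159b)" lines that follow, bound to the ice driver). Outside the harness, and assuming a standard pciutils install, the same E810 ports could be listed directly:

  # List Intel E810 NICs (device ID 0x159b) with numeric IDs and the kernel driver in use
  lspci -nn -k -d 8086:159b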
00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:54.667 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:54.667 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp 
== tcp ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:54.667 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:54.667 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.667 00:35:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.667 00:35:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.667 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.667 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.667 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:54.667 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.667 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.667 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.667 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:54.667 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:54.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:27:54.667 00:27:54.667 --- 10.0.0.2 ping statistics --- 00:27:54.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.667 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:27:54.667 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:54.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:27:54.667 00:27:54.667 --- 10.0.0.1 ping statistics --- 00:27:54.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.667 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.668 ************************************ 00:27:54.668 START TEST nvmf_digest_clean 00:27:54.668 ************************************ 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=3418319 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 3418319 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3418319 ']' 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.668 00:35:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.668 [2024-10-09 00:35:24.444254] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:27:54.668 [2024-10-09 00:35:24.444319] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.668 [2024-10-09 00:35:24.530411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.668 [2024-10-09 00:35:24.625225] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.668 [2024-10-09 00:35:24.625286] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.668 [2024-10-09 00:35:24.625294] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.668 [2024-10-09 00:35:24.625302] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.668 [2024-10-09 00:35:24.625308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
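By this point nvmf_tcp_init has finished the link plumbing traced just above and nvmfappstart has launched the target inside the namespace. Condensed, the sequence amounts to the following (interface names, addresses, port and arguments are the ones from this run; the long workspace paths are shortened for readability):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> host
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

The --wait-for-rpc flag keeps the target idle so the test can finish its configuration over /var/tmp/spdk.sock before any subsystem comes up.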
00:27:54.668 [2024-10-09 00:35:24.626111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.668 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.929 null0 00:27:54.929 [2024-10-09 00:35:25.388688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.930 [2024-10-09 00:35:25.413006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3418465 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3418465 /var/tmp/bperf.sock 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3418465 ']' 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:54.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:54.930 [2024-10-09 00:35:25.473035] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:27:54.930 [2024-10-09 00:35:25.473096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418465 ] 00:27:54.930 [2024-10-09 00:35:25.553283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.191 [2024-10-09 00:35:25.647020] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.772 00:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.772 00:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:55.772 00:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:55.772 00:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:55.772 00:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:56.040 00:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.040 00:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.612 nvme0n1 00:27:56.612 00:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:56.612 00:35:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.612 Running I/O for 2 seconds... 
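The clean-digest pass that has just started is driven entirely over bdevperf's own RPC socket: start bdevperf idle, initialize its framework, attach the remote controller with the data-digest flag, then kick off the workload. Condensed from the trace above (absolute workspace paths shortened; the later passes repeat the same flow with different -w/-o/-q values):

  BPERF=/var/tmp/bperf.sock
  build/examples/bdevperf -m 2 -r "$BPERF" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s "$BPERF" framework_start_init
  scripts/rpc.py -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s "$BPERF" perform_tests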
00:27:58.499 19197.00 IOPS, 74.99 MiB/s [2024-10-08T22:35:29.134Z] 21200.00 IOPS, 82.81 MiB/s 00:27:58.499 Latency(us) 00:27:58.499 [2024-10-08T22:35:29.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.499 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:58.499 nvme0n1 : 2.00 21237.02 82.96 0.00 0.00 6020.89 2307.41 15291.73 00:27:58.499 [2024-10-08T22:35:29.134Z] =================================================================================================================== 00:27:58.499 [2024-10-08T22:35:29.134Z] Total : 21237.02 82.96 0.00 0.00 6020.89 2307.41 15291.73 00:27:58.499 { 00:27:58.499 "results": [ 00:27:58.499 { 00:27:58.499 "job": "nvme0n1", 00:27:58.499 "core_mask": "0x2", 00:27:58.499 "workload": "randread", 00:27:58.499 "status": "finished", 00:27:58.499 "queue_depth": 128, 00:27:58.499 "io_size": 4096, 00:27:58.499 "runtime": 2.004142, 00:27:58.499 "iops": 21237.0181354415, 00:27:58.499 "mibps": 82.95710209156836, 00:27:58.499 "io_failed": 0, 00:27:58.499 "io_timeout": 0, 00:27:58.499 "avg_latency_us": 6020.893364033645, 00:27:58.499 "min_latency_us": 2307.4133333333334, 00:27:58.499 "max_latency_us": 15291.733333333334 00:27:58.499 } 00:27:58.499 ], 00:27:58.499 "core_count": 1 00:27:58.499 } 00:27:58.499 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:58.499 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:58.499 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:58.499 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:58.499 | select(.opcode=="crc32c") 00:27:58.499 | "\(.module_name) \(.executed)"' 00:27:58.499 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3418465 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3418465 ']' 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3418465 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3418465 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3418465' 00:27:58.760 killing process with pid 3418465 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3418465 00:27:58.760 Received shutdown signal, test time was about 2.000000 seconds 00:27:58.760 00:27:58.760 Latency(us) 00:27:58.760 [2024-10-08T22:35:29.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.760 [2024-10-08T22:35:29.395Z] =================================================================================================================== 00:27:58.760 [2024-10-08T22:35:29.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.760 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3418465 00:27:59.020 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:59.020 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3419207 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3419207 /var/tmp/bperf.sock 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3419207 ']' 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:59.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.021 00:35:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.021 [2024-10-09 00:35:29.500380] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:27:59.021 [2024-10-09 00:35:29.500437] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419207 ] 00:27:59.021 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:59.021 Zero copy mechanism will not be used. 00:27:59.021 [2024-10-09 00:35:29.574873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.021 [2024-10-09 00:35:29.628071] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.974 00:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.974 00:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:59.974 00:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:59.974 00:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:59.974 00:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:59.974 00:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.974 00:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.544 nvme0n1 00:28:00.544 00:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:00.544 00:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:00.544 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:00.544 Zero copy mechanism will not be used. 00:28:00.544 Running I/O for 2 seconds... 
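The accounting step that closed the first pass above (and recurs after every pass) asks the bdevperf process which accel module executed the crc32c operations backing the digests. A hedged illustration of what that pipeline produces; the sample output line is made up, and only the field names come from the jq filter in the trace:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # hypothetical output:
  #   software 21237
  # read -r acc_module acc_executed then asserts (( acc_executed > 0 )) and that the module
  # matches exp_module, which is "software" in these passes because scan_dsa=false.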
00:28:02.438 5558.00 IOPS, 694.75 MiB/s [2024-10-08T22:35:33.073Z] 4706.50 IOPS, 588.31 MiB/s 00:28:02.438 Latency(us) 00:28:02.438 [2024-10-08T22:35:33.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.438 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:02.438 nvme0n1 : 2.00 4708.98 588.62 0.00 0.00 3395.36 631.47 11304.96 00:28:02.438 [2024-10-08T22:35:33.073Z] =================================================================================================================== 00:28:02.438 [2024-10-08T22:35:33.073Z] Total : 4708.98 588.62 0.00 0.00 3395.36 631.47 11304.96 00:28:02.438 { 00:28:02.438 "results": [ 00:28:02.438 { 00:28:02.438 "job": "nvme0n1", 00:28:02.438 "core_mask": "0x2", 00:28:02.438 "workload": "randread", 00:28:02.438 "status": "finished", 00:28:02.438 "queue_depth": 16, 00:28:02.438 "io_size": 131072, 00:28:02.438 "runtime": 2.002345, 00:28:02.438 "iops": 4708.97872244793, 00:28:02.438 "mibps": 588.6223403059912, 00:28:02.438 "io_failed": 0, 00:28:02.438 "io_timeout": 0, 00:28:02.438 "avg_latency_us": 3395.355684236575, 00:28:02.438 "min_latency_us": 631.4666666666667, 00:28:02.438 "max_latency_us": 11304.96 00:28:02.438 } 00:28:02.438 ], 00:28:02.438 "core_count": 1 00:28:02.438 } 00:28:02.438 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:02.438 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:02.438 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:02.438 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:02.438 | select(.opcode=="crc32c") 00:28:02.438 | "\(.module_name) \(.executed)"' 00:28:02.438 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3419207 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3419207 ']' 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3419207 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3419207 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = 
sudo ']' 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3419207' 00:28:02.698 killing process with pid 3419207 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3419207 00:28:02.698 Received shutdown signal, test time was about 2.000000 seconds 00:28:02.698 00:28:02.698 Latency(us) 00:28:02.698 [2024-10-08T22:35:33.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.698 [2024-10-08T22:35:33.333Z] =================================================================================================================== 00:28:02.698 [2024-10-08T22:35:33.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.698 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3419207 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3420076 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3420076 /var/tmp/bperf.sock 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3420076 ']' 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.959 00:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.959 [2024-10-09 00:35:33.438128] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:28:02.959 [2024-10-09 00:35:33.438183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420076 ] 00:28:02.959 [2024-10-09 00:35:33.512316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.959 [2024-10-09 00:35:33.565589] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.899 00:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:03.899 00:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:03.899 00:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:03.899 00:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:03.899 00:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:03.899 00:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.899 00:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.471 nvme0n1 00:28:04.471 00:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:04.471 00:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:04.471 Running I/O for 2 seconds... 
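A quick sanity check on the numbers these tables report: the MiB/s column is just IOPS times the I/O size. For the first pass above, for instance:

  # 21237.02 IOPS x 4096 B per I/O = 86,986,833.9 B/s; divided by 1024*1024 that is ~82.96 MiB/s,
  # matching the 82.96 printed in that pass's Latency table.
  echo '21237.02 * 4096 / 1048576' | bc -l    # -> 82.956...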
00:28:06.354 30515.00 IOPS, 119.20 MiB/s [2024-10-08T22:35:36.989Z] 30513.50 IOPS, 119.19 MiB/s 00:28:06.354 Latency(us) 00:28:06.354 [2024-10-08T22:35:36.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.354 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:06.354 nvme0n1 : 2.00 30532.14 119.27 0.00 0.00 4187.14 2116.27 8246.61 00:28:06.354 [2024-10-08T22:35:36.989Z] =================================================================================================================== 00:28:06.354 [2024-10-08T22:35:36.989Z] Total : 30532.14 119.27 0.00 0.00 4187.14 2116.27 8246.61 00:28:06.354 { 00:28:06.354 "results": [ 00:28:06.354 { 00:28:06.354 "job": "nvme0n1", 00:28:06.354 "core_mask": "0x2", 00:28:06.354 "workload": "randwrite", 00:28:06.354 "status": "finished", 00:28:06.354 "queue_depth": 128, 00:28:06.354 "io_size": 4096, 00:28:06.354 "runtime": 2.004347, 00:28:06.354 "iops": 30532.1383971937, 00:28:06.354 "mibps": 119.26616561403789, 00:28:06.354 "io_failed": 0, 00:28:06.354 "io_timeout": 0, 00:28:06.354 "avg_latency_us": 4187.140244565366, 00:28:06.354 "min_latency_us": 2116.266666666667, 00:28:06.354 "max_latency_us": 8246.613333333333 00:28:06.354 } 00:28:06.354 ], 00:28:06.354 "core_count": 1 00:28:06.354 } 00:28:06.354 00:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:06.354 00:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:06.354 00:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:06.354 00:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:06.354 | select(.opcode=="crc32c") 00:28:06.354 | "\(.module_name) \(.executed)"' 00:28:06.354 00:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:06.615 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:06.615 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:06.615 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:06.615 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:06.615 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3420076 00:28:06.615 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3420076 ']' 00:28:06.615 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3420076 00:28:06.615 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:06.616 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:06.616 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3420076 00:28:06.616 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:06.616 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:28:06.616 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3420076' 00:28:06.616 killing process with pid 3420076 00:28:06.616 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3420076 00:28:06.616 Received shutdown signal, test time was about 2.000000 seconds 00:28:06.616 00:28:06.616 Latency(us) 00:28:06.616 [2024-10-08T22:35:37.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.616 [2024-10-08T22:35:37.251Z] =================================================================================================================== 00:28:06.616 [2024-10-08T22:35:37.251Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:06.616 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3420076 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3420840 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3420840 /var/tmp/bperf.sock 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3420840 ']' 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.875 00:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.875 [2024-10-09 00:35:37.358905] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:28:06.875 [2024-10-09 00:35:37.358964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420840 ] 00:28:06.875 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:06.875 Zero copy mechanism will not be used. 00:28:06.875 [2024-10-09 00:35:37.433284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.875 [2024-10-09 00:35:37.486497] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.817 00:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.817 00:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:07.817 00:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:07.817 00:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:07.817 00:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:07.817 00:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.817 00:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:08.077 nvme0n1 00:28:08.077 00:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:08.077 00:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:08.338 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:08.338 Zero copy mechanism will not be used. 00:28:08.338 Running I/O for 2 seconds... 
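Besides the formatted tables, every pass also echoes its raw results as JSON (the { "results": [ ... ] } blocks above). If those numbers are wanted for post-processing, a small hedged example; results.json stands for a captured copy of one such block, and the field names are the ones printed in this log:

  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (min \(.min_latency_us), max \(.max_latency_us))"' results.json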
00:28:10.235 7030.00 IOPS, 878.75 MiB/s [2024-10-08T22:35:40.870Z] 6446.50 IOPS, 805.81 MiB/s 00:28:10.235 Latency(us) 00:28:10.235 [2024-10-08T22:35:40.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.235 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:10.235 nvme0n1 : 2.00 6448.11 806.01 0.00 0.00 2478.14 1078.61 13325.65 00:28:10.235 [2024-10-08T22:35:40.870Z] =================================================================================================================== 00:28:10.235 [2024-10-08T22:35:40.870Z] Total : 6448.11 806.01 0.00 0.00 2478.14 1078.61 13325.65 00:28:10.235 { 00:28:10.235 "results": [ 00:28:10.235 { 00:28:10.235 "job": "nvme0n1", 00:28:10.235 "core_mask": "0x2", 00:28:10.235 "workload": "randwrite", 00:28:10.235 "status": "finished", 00:28:10.235 "queue_depth": 16, 00:28:10.235 "io_size": 131072, 00:28:10.235 "runtime": 2.002601, 00:28:10.235 "iops": 6448.114227447205, 00:28:10.235 "mibps": 806.0142784309006, 00:28:10.235 "io_failed": 0, 00:28:10.235 "io_timeout": 0, 00:28:10.235 "avg_latency_us": 2478.144761609747, 00:28:10.235 "min_latency_us": 1078.6133333333332, 00:28:10.235 "max_latency_us": 13325.653333333334 00:28:10.235 } 00:28:10.235 ], 00:28:10.235 "core_count": 1 00:28:10.235 } 00:28:10.235 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:10.235 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:10.235 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:10.235 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:10.235 | select(.opcode=="crc32c") 00:28:10.235 | "\(.module_name) \(.executed)"' 00:28:10.235 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3420840 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3420840 ']' 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3420840 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.496 00:35:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3420840 00:28:10.496 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:10.496 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:28:10.496 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3420840' 00:28:10.496 killing process with pid 3420840 00:28:10.496 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3420840 00:28:10.496 Received shutdown signal, test time was about 2.000000 seconds 00:28:10.496 00:28:10.496 Latency(us) 00:28:10.496 [2024-10-08T22:35:41.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.496 [2024-10-08T22:35:41.131Z] =================================================================================================================== 00:28:10.496 [2024-10-08T22:35:41.131Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.496 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3420840 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3418319 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3418319 ']' 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3418319 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3418319 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3418319' 00:28:10.757 killing process with pid 3418319 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3418319 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3418319 00:28:10.757 00:28:10.757 real 0m16.969s 00:28:10.757 user 0m33.447s 00:28:10.757 sys 0m3.822s 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.757 ************************************ 00:28:10.757 END TEST nvmf_digest_clean 00:28:10.757 ************************************ 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.757 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:11.018 ************************************ 00:28:11.018 START TEST nvmf_digest_error 00:28:11.018 ************************************ 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=3421555 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 3421555 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3421555 ']' 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.018 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.019 00:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.019 [2024-10-09 00:35:41.483034] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:28:11.019 [2024-10-09 00:35:41.483085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.019 [2024-10-09 00:35:41.567310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.019 [2024-10-09 00:35:41.626958] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.019 [2024-10-09 00:35:41.626992] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.019 [2024-10-09 00:35:41.626998] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.019 [2024-10-09 00:35:41.627003] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.019 [2024-10-09 00:35:41.627007] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
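The nvmf_digest_error suite starting here launches the target with --wait-for-rpc for the same reason: as the next trace line shows, it reassigns the crc32c opcode to the accel "error" module before the target finishes initializing, presumably so digest calculations can be made to fail on demand (that intent is an inference from the test name; only the command itself is taken from the trace, and the default RPC socket /var/tmp/spdk.sock is assumed):

  scripts/rpc.py accel_assign_opc -o crc32c -m error    # route crc32c through the "error" module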
00:28:11.019 [2024-10-09 00:35:41.627477] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.959 [2024-10-09 00:35:42.313358] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.959 null0 00:28:11.959 [2024-10-09 00:35:42.390885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.959 [2024-10-09 00:35:42.415090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3421898 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3421898 /var/tmp/bperf.sock 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3421898 ']' 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
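Between the accel_assign_opc call and the "Target Listening on 10.0.0.2 port 4420" notice a few lines above, common_target_config finishes initialization and builds the target over RPC. Roughly, that sequence looks like the sketch below; the RPC names match the messages in the trace, but the null bdev size, transport options and serial number are assumptions, since digest.sh issues them in a batch that is not echoed here:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Route every crc32c operation through the error-injection accel module,
  # then let the target (started with --wait-for-rpc) finish initializing.
  $RPC accel_assign_opc -o crc32c -m error
  $RPC framework_start_init

  # TCP transport, a null backing bdev and a listener on 10.0.0.2:4420,
  # matching the "TCP Transport Init" and "Target Listening" notices above.
  $RPC nvmf_create_transport -t tcp
  $RPC bdev_null_create null0 100 4096                      # size/block size assumed
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
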
00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:11.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.959 00:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.959 [2024-10-09 00:35:42.469500] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:28:11.959 [2024-10-09 00:35:42.469547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421898 ] 00:28:11.959 [2024-10-09 00:35:42.546050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.220 [2024-10-09 00:35:42.599434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.791 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.791 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:12.791 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:12.791 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:13.052 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:13.052 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.052 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.052 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.052 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.052 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.313 nvme0n1 00:28:13.313 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:13.313 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.313 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
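The host-side setup above splits across two RPC sockets: judging by the wrappers in the trace, bperf_rpc talks to the bdevperf instance on /var/tmp/bperf.sock, while the bare rpc_cmd calls go to the nvmf target's default socket, where crc32c was assigned to the error module earlier. Collected into one sketch (the commands are the ones shown in the trace; only the shell variable names are added here):

  TGT_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_RPC="$TGT_RPC -s /var/tmp/bperf.sock"
  BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

  # Retry indefinitely on I/O errors so every injected digest failure becomes a
  # retried read instead of a failed job.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep crc32c injection disabled on the target while the controller connects,
  $TGT_RPC accel_error_inject_error -o crc32c -t disable

  # attach from bdevperf with the data digest enabled on the TCP connection,
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # then corrupt the next 256 crc32c results on the target, so the digests sent
  # with C2H data no longer match and the host reports "data digest error".
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the 2-second random-read workload defined on the bdevperf command line.
  $BPERF_PY -s /var/tmp/bperf.sock perform_tests

With --bdev-retry-count -1 each corrupted completion is retried rather than failed, which is why the trace below shows a long run of "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR entries while the workload still finishes with a normal IOPS summary.
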
00:28:13.313 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.313 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:13.313 00:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.580 Running I/O for 2 seconds... 00:28:13.580 [2024-10-09 00:35:43.989388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:43.989419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:43.989428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.001021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.001042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.001049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.011970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.011989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.011996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.022064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.022082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.022089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.030964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.030982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.030989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.039472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.039490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.039497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.048302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.048320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.048327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.057325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.057348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.057354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.065854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.065871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.065877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.074289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.074307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.074314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.083927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.083945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.083951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.092546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.092564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.092570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.101807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.101825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.101832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.110436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.110453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.110460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.580 [2024-10-09 00:35:44.119798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.580 [2024-10-09 00:35:44.119816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.580 [2024-10-09 00:35:44.119822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.128550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.128567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.581 [2024-10-09 00:35:44.128573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.135859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.135876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.581 [2024-10-09 00:35:44.135883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.147527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.147545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.581 [2024-10-09 00:35:44.147551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.156330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.156347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.581 [2024-10-09 00:35:44.156354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.164845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.164862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.581 [2024-10-09 00:35:44.164868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.175048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.175065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.581 [2024-10-09 00:35:44.175072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.183967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.183984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.581 [2024-10-09 00:35:44.183990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.191653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.191670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.581 [2024-10-09 00:35:44.191677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.200591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.200607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.581 [2024-10-09 00:35:44.200614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.581 [2024-10-09 00:35:44.209334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.581 [2024-10-09 00:35:44.209351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.582 [2024-10-09 00:35:44.209363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.851 [2024-10-09 00:35:44.219232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.851 [2024-10-09 00:35:44.219250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.851 [2024-10-09 00:35:44.219257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.851 [2024-10-09 00:35:44.228744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.851 [2024-10-09 00:35:44.228761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24592 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:13.851 [2024-10-09 00:35:44.228767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.851 [2024-10-09 00:35:44.236769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.851 [2024-10-09 00:35:44.236786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.851 [2024-10-09 00:35:44.236792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.851 [2024-10-09 00:35:44.245728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.851 [2024-10-09 00:35:44.245744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.851 [2024-10-09 00:35:44.245751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.851 [2024-10-09 00:35:44.256734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.851 [2024-10-09 00:35:44.256751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.851 [2024-10-09 00:35:44.256758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.851 [2024-10-09 00:35:44.265576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.851 [2024-10-09 00:35:44.265593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.851 [2024-10-09 00:35:44.265599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.851 [2024-10-09 00:35:44.274641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.851 [2024-10-09 00:35:44.274658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.851 [2024-10-09 00:35:44.274664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.283295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.283312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.283318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.291647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.291668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:103 nsid:1 lba:13853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.291674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.300575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.300592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.300599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.309042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.309058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.309065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.318648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.318665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.318671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.326759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.326776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.326782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.335130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.335147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.335154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.345750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.345768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.345774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.355286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.355303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.355309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.363180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.363197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.363203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.373338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.373355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.373361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.382339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.382357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.382363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.391375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.391392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.391399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.400630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.400647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.400654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.410475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.410492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.410498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.418216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 
00:28:13.852 [2024-10-09 00:35:44.418233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.418239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.427483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.427500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.427506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.438029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.438047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.438053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.447905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.447926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.447932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.456668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.456684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.456691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.464540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.464557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.464563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.473299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.473315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.473322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.852 [2024-10-09 00:35:44.483031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1886c70) 00:28:13.852 [2024-10-09 00:35:44.483048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.852 [2024-10-09 00:35:44.483054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.491119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.491135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.491142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.501652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.501668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.501675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.510094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.510111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.510117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.518847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.518864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.518870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.527804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.527821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.527827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.536197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.536213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.536219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.545455] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.545471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.545478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.555042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.555059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.555065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.564537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.564553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.564559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.573435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.573451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.573457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.580753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.580769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.580776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.590705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.590727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.590734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.599776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.599793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.599802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.607827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.607844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.114 [2024-10-09 00:35:44.607850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.114 [2024-10-09 00:35:44.617099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.114 [2024-10-09 00:35:44.617116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.617122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.624555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.624572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.624578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.636701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.636718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.636728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.648124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.648141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.648147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.657200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.657217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.657223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.667020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.667037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.667043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.676744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.676761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.676767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.686043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.686063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.686069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.695501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.695518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.695524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.705398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.705415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.705421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.715136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.715153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.715159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.722661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.722678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.722685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.733874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.733890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.733897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.115 [2024-10-09 00:35:44.741811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.115 [2024-10-09 00:35:44.741828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.115 [2024-10-09 00:35:44.741834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.750417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.750433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.750439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.759945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.759962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.759968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.769363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.769381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.769387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.777461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.777478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.777484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.786705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.786727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.786734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.795943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.795960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:14.377 [2024-10-09 00:35:44.795966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.805196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.805213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.805219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.813654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.813671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.813677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.825899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.825916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.825922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.835802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.835819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.835826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.844100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.844117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.844126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.852604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.852621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.852627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.862088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.862105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:12051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.862111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.872765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.872782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.872788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.884703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.884725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.884731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.892032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.892049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.892056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.902168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.902185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.902191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.909985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.910002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.910009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.919310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.919327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.919333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.927841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.927858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.927864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.937117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.937133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.937139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.946040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.946057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.946063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.954696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.954713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.954724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.964261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.964278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.964284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.971439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.971456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.971462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 27547.00 IOPS, 107.61 MiB/s [2024-10-08T22:35:45.012Z] [2024-10-09 00:35:44.981532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.377 [2024-10-09 00:35:44.981549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.377 [2024-10-09 00:35:44.981555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.377 [2024-10-09 00:35:44.989966] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.378 [2024-10-09 00:35:44.989983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.378 [2024-10-09 00:35:44.989990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.378 [2024-10-09 00:35:44.999120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.378 [2024-10-09 00:35:44.999137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.378 [2024-10-09 00:35:44.999146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.378 [2024-10-09 00:35:45.007914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.378 [2024-10-09 00:35:45.007931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.378 [2024-10-09 00:35:45.007937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.639 [2024-10-09 00:35:45.017097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.639 [2024-10-09 00:35:45.017114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.639 [2024-10-09 00:35:45.017120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.639 [2024-10-09 00:35:45.024848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.639 [2024-10-09 00:35:45.024865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.024871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.034550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.034567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.034573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.042866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.042883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.042889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:14.640 [2024-10-09 00:35:45.052116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.052133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.052139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.060677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.060693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.060700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.069819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.069835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.069841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.077583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.077603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.077609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.085985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.086002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.086008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.095968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.095984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.095990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.104279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.104296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.104302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.112263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.112280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.112286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.122521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.122538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.122545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.131329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.131346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.131352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.140180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.140197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.140203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.148919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.148936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.148942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.157909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.157927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.157933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.166187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.166204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.166210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.175112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.175129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.175136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.183866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.183884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.183891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.193917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.193934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.193940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.203657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.203674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.203680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.212144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.212162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.212168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.221627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.221644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.221650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.229805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.229821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:14.640 [2024-10-09 00:35:45.229831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.239371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.239388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.239394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.248359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.248376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.248382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.257416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.257433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.257439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.640 [2024-10-09 00:35:45.265724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.640 [2024-10-09 00:35:45.265742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.640 [2024-10-09 00:35:45.265748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.903 [2024-10-09 00:35:45.274711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.903 [2024-10-09 00:35:45.274733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.903 [2024-10-09 00:35:45.274740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.903 [2024-10-09 00:35:45.283418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.903 [2024-10-09 00:35:45.283435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.903 [2024-10-09 00:35:45.283442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.903 [2024-10-09 00:35:45.292150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.903 [2024-10-09 00:35:45.292167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:25562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.903 [2024-10-09 00:35:45.292173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.903 [2024-10-09 00:35:45.299774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.903 [2024-10-09 00:35:45.299790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.903 [2024-10-09 00:35:45.299796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.903 [2024-10-09 00:35:45.311063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.903 [2024-10-09 00:35:45.311081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.903 [2024-10-09 00:35:45.311087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.903 [2024-10-09 00:35:45.321649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.903 [2024-10-09 00:35:45.321666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.903 [2024-10-09 00:35:45.321672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.903 [2024-10-09 00:35:45.331246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.903 [2024-10-09 00:35:45.331263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.903 [2024-10-09 00:35:45.331269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.903 [2024-10-09 00:35:45.339080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.903 [2024-10-09 00:35:45.339097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.903 [2024-10-09 00:35:45.339103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.903 [2024-10-09 00:35:45.350279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.350296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.350302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.362452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.362469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.362475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.370809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.370826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.370833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.380218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.380235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.380242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.389433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.389451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.389463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.398705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.398729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.398735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.408466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.408483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.408490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.415436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.415453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.415459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.424767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 
00:28:14.904 [2024-10-09 00:35:45.424784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.424790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.433379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.433396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.433402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.442388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.442405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.442411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.452242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.452259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.452265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.460612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.460629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.460636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.469691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.469712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.469726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.478895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.478912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.478919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.487537] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.487553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.487560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.496067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.496084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.496090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.504616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.504634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.504640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.512934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.512951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.512957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.522656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.522674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.522680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.904 [2024-10-09 00:35:45.531244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:14.904 [2024-10-09 00:35:45.531261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.904 [2024-10-09 00:35:45.531267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.540603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.540620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.540626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:15.170 [2024-10-09 00:35:45.548590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.548607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.548613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.557336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.557352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.557358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.567513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.567530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.567537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.575887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.575904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.575910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.585325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.585341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.585348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.593472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.593489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.593495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.603659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.603676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.603683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.612285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.612302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.612308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.621100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.621117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.621126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.630399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.630416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.630422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.170 [2024-10-09 00:35:45.639647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.170 [2024-10-09 00:35:45.639664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.170 [2024-10-09 00:35:45.639670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.649113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.649130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.649136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.657271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.657287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.657294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.666142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.666158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.666165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.674887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.674904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.674911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.683310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.683327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.683333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.692630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.692647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.692654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.701759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.701776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.701782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.710291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.710308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.710315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.718766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.718783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.718789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.727980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.727998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.171 [2024-10-09 00:35:45.728004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.738119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.738136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.738142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.746545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.746562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.746568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.755555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.755573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.755579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.763455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.763472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.763479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.773177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.773194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.773204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.783262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.783279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.783285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.794543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.794560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11910 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.794566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.171 [2024-10-09 00:35:45.803169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.171 [2024-10-09 00:35:45.803186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.171 [2024-10-09 00:35:45.803192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.811739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.811756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.811763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.820028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.820045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.820051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.829435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.829452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.829458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.837841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.837858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.837864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.846830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.846847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.846854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.855981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.856001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.856007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.864741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.864758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.864764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.873148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.873165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.873171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.883443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.883460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.883467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.892433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.892450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.892456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.901467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.901484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.901490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.909898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.909915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.909921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.918951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 
00:28:15.445 [2024-10-09 00:35:45.918967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.918973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.927383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.927400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.927406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.936795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.936812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.936818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.946551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.946567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.946574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.955474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.955491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.445 [2024-10-09 00:35:45.955497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.445 [2024-10-09 00:35:45.965303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.445 [2024-10-09 00:35:45.965319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.446 [2024-10-09 00:35:45.965326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.446 [2024-10-09 00:35:45.975429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1886c70) 00:28:15.446 [2024-10-09 00:35:45.975447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.446 [2024-10-09 00:35:45.975454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.446 27921.50 IOPS, 109.07 MiB/s 00:28:15.446 Latency(us) 00:28:15.446 
[2024-10-08T22:35:46.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.446 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:15.446 nvme0n1 : 2.01 27922.58 109.07 0.00 0.00 4578.32 2389.33 17257.81 00:28:15.446 [2024-10-08T22:35:46.081Z] =================================================================================================================== 00:28:15.446 [2024-10-08T22:35:46.081Z] Total : 27922.58 109.07 0.00 0.00 4578.32 2389.33 17257.81 00:28:15.446 { 00:28:15.446 "results": [ 00:28:15.446 { 00:28:15.446 "job": "nvme0n1", 00:28:15.446 "core_mask": "0x2", 00:28:15.446 "workload": "randread", 00:28:15.446 "status": "finished", 00:28:15.446 "queue_depth": 128, 00:28:15.446 "io_size": 4096, 00:28:15.446 "runtime": 2.005868, 00:28:15.446 "iops": 27922.575164467453, 00:28:15.446 "mibps": 109.07255923620099, 00:28:15.446 "io_failed": 0, 00:28:15.446 "io_timeout": 0, 00:28:15.446 "avg_latency_us": 4578.320960321853, 00:28:15.446 "min_latency_us": 2389.3333333333335, 00:28:15.446 "max_latency_us": 17257.81333333333 00:28:15.446 } 00:28:15.446 ], 00:28:15.446 "core_count": 1 00:28:15.446 } 00:28:15.446 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:15.446 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:15.446 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:15.446 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:15.446 | .driver_specific 00:28:15.446 | .nvme_error 00:28:15.446 | .status_code 00:28:15.446 | .command_transient_transport_error' 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 )) 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3421898 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3421898 ']' 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3421898 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3421898 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3421898' 00:28:15.706 killing process with pid 3421898 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3421898 00:28:15.706 Received shutdown signal, test time was about 2.000000 seconds 00:28:15.706 00:28:15.706 Latency(us) 00:28:15.706 [2024-10-08T22:35:46.341Z] Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:28:15.706 [2024-10-08T22:35:46.341Z] =================================================================================================================== 00:28:15.706 [2024-10-08T22:35:46.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.706 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3421898 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3422579 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3422579 /var/tmp/bperf.sock 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3422579 ']' 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.966 00:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.966 [2024-10-09 00:35:46.412922] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:28:15.966 [2024-10-09 00:35:46.412978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422579 ] 00:28:15.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:15.966 Zero copy mechanism will not be used. 
00:28:15.966 [2024-10-09 00:35:46.490802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.966 [2024-10-09 00:35:46.543109] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.963 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.276 nvme0n1 00:28:17.276 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:17.276 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.276 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.276 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.276 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:17.276 00:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.276 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:17.276 Zero copy mechanism will not be used. 00:28:17.276 Running I/O for 2 seconds... 
00:28:17.276 [2024-10-09 00:35:47.878381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.276 [2024-10-09 00:35:47.878414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.276 [2024-10-09 00:35:47.878424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.276 [2024-10-09 00:35:47.889054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.276 [2024-10-09 00:35:47.889076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.276 [2024-10-09 00:35:47.889083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.901256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.901278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.901285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.911814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.911832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.911838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.922752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.922770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.922777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.933396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.933414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.933421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.946475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.946493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.946500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.955968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.955986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.955992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.966134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.966151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.966158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.976387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.976404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.976410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.987327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.987345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.987352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:47.998179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:47.998197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:47.998204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.009586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.009604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.009610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.020274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.020291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.020298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.030278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.030295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.030302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.039451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.039468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.039475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.048707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.048729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.048735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.059336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.059354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.059361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.068955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.068973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.068979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.082290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.082307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.082319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.090296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.090314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:17.536 [2024-10-09 00:35:48.090320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.099211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.099229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.099236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.108400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.108417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.108423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.120260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.120278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.120285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.130327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.130344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.130351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.140488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.140506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.536 [2024-10-09 00:35:48.140512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.536 [2024-10-09 00:35:48.151477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.536 [2024-10-09 00:35:48.151494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.537 [2024-10-09 00:35:48.151500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.537 [2024-10-09 00:35:48.162072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.537 [2024-10-09 00:35:48.162090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.537 [2024-10-09 00:35:48.162096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.174278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.174299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.797 [2024-10-09 00:35:48.174305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.184818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.184836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.797 [2024-10-09 00:35:48.184842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.195991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.196008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.797 [2024-10-09 00:35:48.196015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.207399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.207416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.797 [2024-10-09 00:35:48.207423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.218153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.218171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.797 [2024-10-09 00:35:48.218177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.229167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.229185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.797 [2024-10-09 00:35:48.229191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.239707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.239729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.797 [2024-10-09 00:35:48.239736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.248182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.248199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.797 [2024-10-09 00:35:48.248206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.259840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.259857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.797 [2024-10-09 00:35:48.259863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.797 [2024-10-09 00:35:48.269491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.797 [2024-10-09 00:35:48.269508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.269515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.279905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.279922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.279929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.290328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.290345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.290352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.301012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.301030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.301036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.309100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 
00:28:17.798 [2024-10-09 00:35:48.309117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.309124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.317005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.317023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.317029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.328211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.328228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.328235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.337951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.337968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.337975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.348596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.348613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.348622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.360530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.360548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.360554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.373108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.373126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.373133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.384390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.384408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.384414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.396309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.396327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.396333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.408666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.408683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.408690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.798 [2024-10-09 00:35:48.421070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:17.798 [2024-10-09 00:35:48.421087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.798 [2024-10-09 00:35:48.421094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.432871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.432889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.432895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.443684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.443702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.443708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.454849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.454867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.454873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.463669] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.463687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.463695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.470004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.470022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.470029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.478659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.478677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.478684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.486129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.486146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.486152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.496899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.496916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.496922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.501913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.501930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.501936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.510532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.510549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.510556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:18.070 [2024-10-09 00:35:48.519253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.519271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.519280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.528595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.528612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.528619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.534659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.534676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.534683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.545015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.545032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.545039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.555280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.555297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.555303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.566931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.566949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.566956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.579005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.579022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.579029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.586790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.586807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.586814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.597278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.597296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.597302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.603521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.603541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.603548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.614253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.614271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.070 [2024-10-09 00:35:48.614277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.070 [2024-10-09 00:35:48.621357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.070 [2024-10-09 00:35:48.621374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.071 [2024-10-09 00:35:48.621380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.071 [2024-10-09 00:35:48.629611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.071 [2024-10-09 00:35:48.629629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.071 [2024-10-09 00:35:48.629635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.071 [2024-10-09 00:35:48.639062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.071 [2024-10-09 00:35:48.639080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.071 [2024-10-09 00:35:48.639086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.071 [2024-10-09 00:35:48.644233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.071 [2024-10-09 00:35:48.644250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.071 [2024-10-09 00:35:48.644257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.071 [2024-10-09 00:35:48.652764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.071 [2024-10-09 00:35:48.652781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.071 [2024-10-09 00:35:48.652787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.071 [2024-10-09 00:35:48.659420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.071 [2024-10-09 00:35:48.659438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.071 [2024-10-09 00:35:48.659445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.071 [2024-10-09 00:35:48.668673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.071 [2024-10-09 00:35:48.668691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.071 [2024-10-09 00:35:48.668698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.071 [2024-10-09 00:35:48.677877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.071 [2024-10-09 00:35:48.677895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.071 [2024-10-09 00:35:48.677901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.071 [2024-10-09 00:35:48.688750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.071 [2024-10-09 00:35:48.688768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.071 [2024-10-09 00:35:48.688774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.071 [2024-10-09 00:35:48.698029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.071 [2024-10-09 00:35:48.698047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.071 [2024-10-09 00:35:48.698053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.415 [2024-10-09 00:35:48.704772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.415 [2024-10-09 00:35:48.704790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.415 [2024-10-09 00:35:48.704796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.415 [2024-10-09 00:35:48.713361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.415 [2024-10-09 00:35:48.713379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.415 [2024-10-09 00:35:48.713385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.415 [2024-10-09 00:35:48.721781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.415 [2024-10-09 00:35:48.721799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.415 [2024-10-09 00:35:48.721805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.415 [2024-10-09 00:35:48.731085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.415 [2024-10-09 00:35:48.731108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.731115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.738394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.738413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.738421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.748259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.748277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.748288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.759482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.759501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.759508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.768823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.768841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.768848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.778417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.778434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.778441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.787085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.787103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.787109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.791890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.791908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.791914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.801402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.801421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.801428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.809967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.809985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.809992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.821035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.821054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.821060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.832461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.832480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.832486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.844196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.844215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.844222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.854812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.854831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.854837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.416 3120.00 IOPS, 390.00 MiB/s [2024-10-08T22:35:49.051Z] [2024-10-09 00:35:48.866604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.866623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.866630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.878053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.878072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.878079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.890407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.890426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.890433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.902740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.902758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.902764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.915369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.915388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.915394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.927957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.927975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.927985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.939902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.939921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.939928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.952327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.952346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.952352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.964259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.964278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.964285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.975013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.975032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.975039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.985154] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.985173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.985179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:48.994834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:48.994852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:48.994859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:49.001635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:49.001654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:49.001660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:49.009602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:49.009621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:49.009627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.416 [2024-10-09 00:35:49.018269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.416 [2024-10-09 00:35:49.018291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.416 [2024-10-09 00:35:49.018298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.417 [2024-10-09 00:35:49.023531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.417 [2024-10-09 00:35:49.023548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.417 [2024-10-09 00:35:49.023554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.417 [2024-10-09 00:35:49.028923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.417 [2024-10-09 00:35:49.028941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.417 [2024-10-09 00:35:49.028948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:18.417 [2024-10-09 00:35:49.036811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.417 [2024-10-09 00:35:49.036829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.417 [2024-10-09 00:35:49.036835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.417 [2024-10-09 00:35:49.045170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.417 [2024-10-09 00:35:49.045189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.417 [2024-10-09 00:35:49.045195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.052781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.678 [2024-10-09 00:35:49.052800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-10-09 00:35:49.052807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.062756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.678 [2024-10-09 00:35:49.062773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-10-09 00:35:49.062780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.068021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.678 [2024-10-09 00:35:49.068039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-10-09 00:35:49.068046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.076788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.678 [2024-10-09 00:35:49.076806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-10-09 00:35:49.076813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.083292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.678 [2024-10-09 00:35:49.083311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-10-09 00:35:49.083318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.091232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.678 [2024-10-09 00:35:49.091250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-10-09 00:35:49.091256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.096772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.678 [2024-10-09 00:35:49.096790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-10-09 00:35:49.096796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.106090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.678 [2024-10-09 00:35:49.106108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-10-09 00:35:49.106115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.110998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.678 [2024-10-09 00:35:49.111016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.678 [2024-10-09 00:35:49.111022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.678 [2024-10-09 00:35:49.120398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.120416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.120422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.125182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.125200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.125207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.136128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.136147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.136153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.143020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.143038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.143048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.148795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.148814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.148820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.153195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.153213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.153220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.157759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.157776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.157783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.167771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.167789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.167796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.175034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.175052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.175058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.183480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.183499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.183505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.187941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.187959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.187965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.194603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.194621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.194627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.199089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.199111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.199117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.203453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.203471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.203477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.207890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.207908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.207915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.213128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.213147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.213153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.221352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.221371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 
[2024-10-09 00:35:49.221378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.225826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.225843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.225850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.229973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.229992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.229998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.234387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.234406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.234412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.242243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.242262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.242268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.250198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.250217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.250223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.254610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.254628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.254634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.261632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.261651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.261658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.268813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.268831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.268837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.275556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.275575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.275582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.281727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.281745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.281751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.286812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.286830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.286837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.679 [2024-10-09 00:35:49.295039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.679 [2024-10-09 00:35:49.295058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.679 [2024-10-09 00:35:49.295064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.680 [2024-10-09 00:35:49.303014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.680 [2024-10-09 00:35:49.303033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.680 [2024-10-09 00:35:49.303043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.680 [2024-10-09 00:35:49.311158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.680 [2024-10-09 00:35:49.311177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.680 [2024-10-09 00:35:49.311183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.316802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.316821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.316827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.324067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.324085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.324091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.330082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.330101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.330107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.339078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.339097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.339103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.347882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.347901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.347907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.356550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.356568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.356575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.361469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.361487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.361493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.368784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.368807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.368813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.375305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.375324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.375330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.383461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.383480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.383486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.393544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.393563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.393570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.404514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.404531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.404537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.415549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.415568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.415574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.428092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 
[2024-10-09 00:35:49.428111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.428117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.440073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.440091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.440097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.452447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.452466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.452472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.464992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.465011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.465018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.476218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.476237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.476243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.488060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.488079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.488086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.499882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.499901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.499907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.511644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.511663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.511669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.522818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.522836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.941 [2024-10-09 00:35:49.522843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.941 [2024-10-09 00:35:49.533813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.941 [2024-10-09 00:35:49.533831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.942 [2024-10-09 00:35:49.533837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.942 [2024-10-09 00:35:49.544268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.942 [2024-10-09 00:35:49.544287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.942 [2024-10-09 00:35:49.544294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.942 [2024-10-09 00:35:49.554642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.942 [2024-10-09 00:35:49.554661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.942 [2024-10-09 00:35:49.554671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.942 [2024-10-09 00:35:49.564428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.942 [2024-10-09 00:35:49.564446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.942 [2024-10-09 00:35:49.564452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.942 [2024-10-09 00:35:49.573253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:18.942 [2024-10-09 00:35:49.573272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.942 [2024-10-09 00:35:49.573278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.583289] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.583307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.583314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.591031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.591049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.591056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.602316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.602334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.602341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.609027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.609045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.609051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.614571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.614590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.614596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.625772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.625790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.625796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.637167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.637186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.637192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:19.203 [2024-10-09 00:35:49.648713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.648737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.648744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.660498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.660516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.660522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.671707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.671729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.671736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.678110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.678128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.678134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.690479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.690497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.690504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.702145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.702162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.702169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.203 [2024-10-09 00:35:49.714693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.203 [2024-10-09 00:35:49.714711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.203 [2024-10-09 00:35:49.714717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.726842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.726860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.726870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.739733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.739751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.739758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.752262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.752280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.752287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.763403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.763421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.763428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.771728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.771746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.771752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.780674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.780692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.780698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.792293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.792311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.792317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.804245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.804263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.804270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.816245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.816263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.816270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.204 [2024-10-09 00:35:49.827788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.204 [2024-10-09 00:35:49.827809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.204 [2024-10-09 00:35:49.827816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.465 [2024-10-09 00:35:49.837344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.465 [2024-10-09 00:35:49.837363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.465 [2024-10-09 00:35:49.837369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.465 [2024-10-09 00:35:49.847640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.465 [2024-10-09 00:35:49.847658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.465 [2024-10-09 00:35:49.847665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.465 [2024-10-09 00:35:49.858250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.465 [2024-10-09 00:35:49.858267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.465 [2024-10-09 00:35:49.858273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.465 3317.00 IOPS, 414.62 MiB/s [2024-10-08T22:35:50.100Z] [2024-10-09 00:35:49.868816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfde8d0) 00:28:19.465 [2024-10-09 00:35:49.868834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.465 
[2024-10-09 00:35:49.868841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:19.465
00:28:19.466 Latency(us)
00:28:19.466 [2024-10-08T22:35:50.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.466 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:19.466 nvme0n1 : 2.00 3320.20 415.02 0.00 0.00 4814.94 559.79 13271.04
00:28:19.466 [2024-10-08T22:35:50.101Z] ===================================================================================================================
00:28:19.466 [2024-10-08T22:35:50.101Z] Total : 3320.20 415.02 0.00 0.00 4814.94 559.79 13271.04
00:28:19.466 {
00:28:19.466   "results": [
00:28:19.466     {
00:28:19.466       "job": "nvme0n1",
00:28:19.466       "core_mask": "0x2",
00:28:19.466       "workload": "randread",
00:28:19.466       "status": "finished",
00:28:19.466       "queue_depth": 16,
00:28:19.466       "io_size": 131072,
00:28:19.466       "runtime": 2.002894,
00:28:19.466       "iops": 3320.19567685559,
00:28:19.466       "mibps": 415.02445960694877,
00:28:19.466       "io_failed": 0,
00:28:19.466       "io_timeout": 0,
00:28:19.466       "avg_latency_us": 4814.935001503759,
00:28:19.466       "min_latency_us": 559.7866666666666,
00:28:19.466       "max_latency_us": 13271.04
00:28:19.466     }
00:28:19.466   ],
00:28:19.466   "core_count": 1
00:28:19.466 }
00:28:19.466 00:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:19.466 00:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:19.466 00:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:19.466 | .driver_specific
00:28:19.466 | .nvme_error
00:28:19.466 | .status_code
00:28:19.466 | .command_transient_transport_error'
00:28:19.466 00:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:19.466 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 ))
00:28:19.466 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3422579
00:28:19.466 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3422579 ']'
00:28:19.466 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3422579
00:28:19.466 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:19.466 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:19.466 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3422579
00:28:19.726 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3422579'
00:28:19.727 killing process with pid 3422579
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3422579
00:28:19.727 Received shutdown signal, test time was about 2.000000 seconds
00:28:19.727
00:28:19.727 Latency(us)
00:28:19.727 [2024-10-08T22:35:50.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.727 [2024-10-08T22:35:50.362Z] ===================================================================================================================
00:28:19.727 [2024-10-08T22:35:50.362Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3422579
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3423271
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3423271 /var/tmp/bperf.sock
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3423271 ']'
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:19.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:19.727 00:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.727 [2024-10-09 00:35:50.314047] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
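The pass/fail decision in the trace above reduces to one RPC and one jq filter: host/digest.sh asks the bdevperf process, over its /var/tmp/bperf.sock RPC socket, for nvme0n1 iostat and pulls the command_transient_transport_error counter out of the NVMe error statistics that bdev_nvme_set_options --nvme-error-stat enables; any non-zero value (214 in this run) means the injected digest corruption surfaced as transient transport errors. A stand-alone sketch of that check, assuming SPDK_DIR points at an SPDK checkout and a bdevperf instance is still answering RPCs on the socket:

#!/usr/bin/env bash
# Sketch of the transient-error check performed by host/digest.sh above.
# Assumptions: SPDK_DIR points at an SPDK checkout and a bdevperf process is
# still serving RPCs on /var/tmp/bperf.sock, as in this run.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
BPERF_SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1

# bdev_get_iostat reports per-bdev statistics; because the controller was set up
# with bdev_nvme_set_options --nvme-error-stat, the reply carries a
# driver_specific.nvme_error section with per-status-code counters.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$BDEV" \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

# The test only passes when at least one injected digest error was counted.
if (( errcount > 0 )); then
    echo "transient transport errors observed: $errcount"
else
    echo "no digest errors recorded" >&2
    exit 1
fi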
00:28:19.727 [2024-10-09 00:35:50.314107] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423271 ] 00:28:19.987 [2024-10-09 00:35:50.389647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.988 [2024-10-09 00:35:50.442830] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.559 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:20.559 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:20.559 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:20.559 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:20.820 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:20.820 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.820 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.820 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.820 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.820 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.081 nvme0n1 00:28:21.081 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:21.081 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.081 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.081 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.081 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:21.081 00:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.081 Running I/O for 2 seconds... 
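The trace above also spells out the whole setup for the randwrite pass: bdevperf is launched against a private RPC socket, bdev retries are made unlimited and NVMe error counters enabled, the controller is attached with data digest (--ddgst) while CRC32C error injection is disabled, and only then is the injection flipped to corrupt mode and perform_tests started. A condensed sketch of that sequence follows; the bperf_rpc/tgt_rpc helpers, the default target socket, and the sleep standing in for waitforlisten are assumptions, while the commands themselves are lifted from this run:

#!/usr/bin/env bash
# Condensed sketch of the randwrite digest-error pass driven by host/digest.sh.
# Assumptions: the nvmf target set up earlier in the job answers RPCs on the
# default /var/tmp/spdk.sock and exports nqn.2016-06.io.spdk:cnode1 at
# 10.0.0.2:4420; 'sleep 2' stands in for the harness's waitforlisten helper.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }  # bdevperf instance
tgt_rpc()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }                   # nvmf target, default socket

# 1. Start bdevperf on core 1 (-m 2): 4 KiB random writes, QD 128, 2 s, wait for RPCs (-z).
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
sleep 2

# 2. Unlimited bdev retries plus per-status-code NVMe error counters.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Attach the controller with data digest enabled (--ddgst) while CRC32C injection is off.
tgt_rpc accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Turn on crc32c corruption (flags exactly as in the trace) and run the workload.
tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

kill "$bperfpid"   # the harness does this via killprocess after checking iostat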
00:28:21.081 [2024-10-09 00:35:51.686555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f6cc8 00:28:21.081 [2024-10-09 00:35:51.687341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.081 [2024-10-09 00:35:51.687369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:21.081 [2024-10-09 00:35:51.695379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1b48 00:28:21.081 [2024-10-09 00:35:51.696158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.081 [2024-10-09 00:35:51.696176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.081 [2024-10-09 00:35:51.703873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f46d0 00:28:21.081 [2024-10-09 00:35:51.704653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.081 [2024-10-09 00:35:51.704669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.081 [2024-10-09 00:35:51.712363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f7970 00:28:21.081 [2024-10-09 00:35:51.713143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.081 [2024-10-09 00:35:51.713159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.720846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fac10 00:28:21.344 [2024-10-09 00:35:51.721629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.721645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.729313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ec408 00:28:21.344 [2024-10-09 00:35:51.730091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.730106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.737764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9168 00:28:21.344 [2024-10-09 00:35:51.738504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.738520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.746223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e38d0 00:28:21.344 [2024-10-09 00:35:51.746982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.746997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.754667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f5be8 00:28:21.344 [2024-10-09 00:35:51.755455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.755471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.763115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f8e88 00:28:21.344 [2024-10-09 00:35:51.763844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.763859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.771549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ee190 00:28:21.344 [2024-10-09 00:35:51.772327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.772342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.780000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eaef0 00:28:21.344 [2024-10-09 00:35:51.780786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.780802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.788459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1b48 00:28:21.344 [2024-10-09 00:35:51.789228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.789243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.796901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f46d0 00:28:21.344 [2024-10-09 00:35:51.797685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.797701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.805330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f7970 00:28:21.344 [2024-10-09 00:35:51.806096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.806112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.813766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fac10 00:28:21.344 [2024-10-09 00:35:51.814545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.344 [2024-10-09 00:35:51.814560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.344 [2024-10-09 00:35:51.822223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ec408 00:28:21.345 [2024-10-09 00:35:51.822996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.823012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.830658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9168 00:28:21.345 [2024-10-09 00:35:51.831424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.831440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.839092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e38d0 00:28:21.345 [2024-10-09 00:35:51.839818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.839833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.847552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f5be8 00:28:21.345 [2024-10-09 00:35:51.848277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.848292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.855995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f8e88 00:28:21.345 [2024-10-09 00:35:51.856757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.856775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.864432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ee190 00:28:21.345 [2024-10-09 00:35:51.865222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.865237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.872876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eaef0 00:28:21.345 [2024-10-09 00:35:51.873643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.873658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.881315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1b48 00:28:21.345 [2024-10-09 00:35:51.882109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.882124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.889753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f46d0 00:28:21.345 [2024-10-09 00:35:51.890535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.890550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.898192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f7970 00:28:21.345 [2024-10-09 00:35:51.898954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.898969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.906694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fac10 00:28:21.345 [2024-10-09 00:35:51.907484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.907499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.915151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ec408 00:28:21.345 [2024-10-09 00:35:51.915935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.915950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.923585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9168 00:28:21.345 [2024-10-09 00:35:51.924308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.924324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.933054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e38d0 00:28:21.345 [2024-10-09 00:35:51.934278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.934294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.942296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198efae0 00:28:21.345 [2024-10-09 00:35:51.943457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.943473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.950739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e6738 00:28:21.345 [2024-10-09 00:35:51.951922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.951937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.959180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eb328 00:28:21.345 [2024-10-09 00:35:51.960386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.960401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.967625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198efae0 00:28:21.345 [2024-10-09 00:35:51.968782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 00:35:51.968797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:21.345 [2024-10-09 00:35:51.974933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e8d30 00:28:21.345 [2024-10-09 00:35:51.975782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.345 [2024-10-09 
00:35:51.975797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:51.983521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df118 00:28:21.605 [2024-10-09 00:35:51.984411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:51.984427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:51.991977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9e10 00:28:21.605 [2024-10-09 00:35:51.992868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:51.992884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:52.000445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fda78 00:28:21.605 [2024-10-09 00:35:52.001338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:52.001354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:52.008927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e5658 00:28:21.605 [2024-10-09 00:35:52.009833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:52.009849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:52.017401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fbcf0 00:28:21.605 [2024-10-09 00:35:52.018290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:52.018306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:52.025874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0350 00:28:21.605 [2024-10-09 00:35:52.026760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:52.026776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:52.034339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e8d30 00:28:21.605 [2024-10-09 00:35:52.035229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:21.605 [2024-10-09 00:35:52.035245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:52.042772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df118 00:28:21.605 [2024-10-09 00:35:52.043670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:52.043686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:52.051248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9e10 00:28:21.605 [2024-10-09 00:35:52.052154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:52.052169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:52.059736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fda78 00:28:21.605 [2024-10-09 00:35:52.060579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:52.060594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.605 [2024-10-09 00:35:52.068222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e5658 00:28:21.605 [2024-10-09 00:35:52.069123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.605 [2024-10-09 00:35:52.069139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.076677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fbcf0 00:28:21.606 [2024-10-09 00:35:52.077580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.077598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.085127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0350 00:28:21.606 [2024-10-09 00:35:52.086022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.086038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.093582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e8d30 00:28:21.606 [2024-10-09 00:35:52.094430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20551 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.094445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.102135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df118 00:28:21.606 [2024-10-09 00:35:52.103023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.103040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.110604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9e10 00:28:21.606 [2024-10-09 00:35:52.111503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.111519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.119087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fda78 00:28:21.606 [2024-10-09 00:35:52.119997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.120013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.127570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e5658 00:28:21.606 [2024-10-09 00:35:52.128463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.128478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.136045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fbcf0 00:28:21.606 [2024-10-09 00:35:52.136923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.136939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.144499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0350 00:28:21.606 [2024-10-09 00:35:52.145390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.145405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.152976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e8d30 00:28:21.606 [2024-10-09 00:35:52.153865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:22712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.153881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.161433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df118 00:28:21.606 [2024-10-09 00:35:52.162348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.162363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.169932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9e10 00:28:21.606 [2024-10-09 00:35:52.170785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.170800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.178394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fda78 00:28:21.606 [2024-10-09 00:35:52.179282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.179298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.186863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e5658 00:28:21.606 [2024-10-09 00:35:52.187759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.187775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.195363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fbcf0 00:28:21.606 [2024-10-09 00:35:52.196235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.196252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.203828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0350 00:28:21.606 [2024-10-09 00:35:52.204685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.204701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.212255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e8d30 00:28:21.606 [2024-10-09 00:35:52.213151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:23276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.213167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.220714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df118 00:28:21.606 [2024-10-09 00:35:52.221611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.221627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.229180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9e10 00:28:21.606 [2024-10-09 00:35:52.230079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.230095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.606 [2024-10-09 00:35:52.237661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fda78 00:28:21.606 [2024-10-09 00:35:52.238560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.606 [2024-10-09 00:35:52.238575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.246136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e5658 00:28:21.868 [2024-10-09 00:35:52.247029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.247045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.254613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fbcf0 00:28:21.868 [2024-10-09 00:35:52.255518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.255533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.263104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0350 00:28:21.868 [2024-10-09 00:35:52.264000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.264016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.271586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e8d30 00:28:21.868 [2024-10-09 00:35:52.272497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.272513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.280061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df118 00:28:21.868 [2024-10-09 00:35:52.280921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.280938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.288534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9e10 00:28:21.868 [2024-10-09 00:35:52.289396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.289412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.297017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fda78 00:28:21.868 [2024-10-09 00:35:52.297895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.297910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.305455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e5658 00:28:21.868 [2024-10-09 00:35:52.306354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.306371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.313930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fbcf0 00:28:21.868 [2024-10-09 00:35:52.314781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.314797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.322354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0350 00:28:21.868 [2024-10-09 00:35:52.323249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.868 [2024-10-09 00:35:52.323264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.868 [2024-10-09 00:35:52.330851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e8d30 00:28:21.869 [2024-10-09 
00:35:52.331696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.331712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.339333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df118 00:28:21.869 [2024-10-09 00:35:52.340221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.340237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.347819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9e10 00:28:21.869 [2024-10-09 00:35:52.348706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.348725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.356270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fda78 00:28:21.869 [2024-10-09 00:35:52.357154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.357169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.364711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e5658 00:28:21.869 [2024-10-09 00:35:52.365607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.365623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.373194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fbcf0 00:28:21.869 [2024-10-09 00:35:52.374093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.374111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.381679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0350 00:28:21.869 [2024-10-09 00:35:52.382580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.382596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.390303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e8d30 
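Each injected corruption appears twice in the stream above, once as the tcp.c data_crc32_calc_done digest error and once as the matching TRANSIENT TRANSPORT ERROR (00/22) completion, so a captured copy of this output can be cross-checked against the iostat counter with nothing more than grep. A rough sketch, with a hypothetical capture file name since the CI writes straight to the console:

# Hypothetical capture file; in this job the same text streams straight to the console.
LOG=bdevperf_randwrite.log

digest_errors=$(grep -Fc 'data_crc32_calc_done: *ERROR*: Data digest error' "$LOG")
transient_completions=$(grep -Fc 'TRANSIENT TRANSPORT ERROR (00/22)' "$LOG")

# The two counts should track each other: every corrupted digest is reported once by
# tcp.c and once as a transient transport completion on the affected command.
echo "digest errors: $digest_errors / transient completions: $transient_completions"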
00:28:21.869 [2024-10-09 00:35:52.391199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.391215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.398783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df118 00:28:21.869 [2024-10-09 00:35:52.399675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.399691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.407247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e9e10 00:28:21.869 [2024-10-09 00:35:52.408147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.408163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.415707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fda78 00:28:21.869 [2024-10-09 00:35:52.416608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.416624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.423632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ea680 00:28:21.869 [2024-10-09 00:35:52.424497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.424513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.432818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e99d8 00:28:21.869 [2024-10-09 00:35:52.433698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.433714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.440639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e5ec8 00:28:21.869 [2024-10-09 00:35:52.441439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.441454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.449776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with 
pdu=0x2000198f0350 00:28:21.869 [2024-10-09 00:35:52.450622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.450638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.458357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e6fa8 00:28:21.869 [2024-10-09 00:35:52.459231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.459247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.466816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ecc78 00:28:21.869 [2024-10-09 00:35:52.467653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.467668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.475276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f4f40 00:28:21.869 [2024-10-09 00:35:52.476149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.476167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.483808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f6458 00:28:21.869 [2024-10-09 00:35:52.484695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.484712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.492281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ed0b0 00:28:21.869 [2024-10-09 00:35:52.493160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.493175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:21.869 [2024-10-09 00:35:52.500714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f46d0 00:28:21.869 [2024-10-09 00:35:52.501599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.869 [2024-10-09 00:35:52.501615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.509162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x14e2450) with pdu=0x2000198f57b0 00:28:22.131 [2024-10-09 00:35:52.510053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.510069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.517605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2d80 00:28:22.131 [2024-10-09 00:35:52.518493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.518509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.526044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e27f0 00:28:22.131 [2024-10-09 00:35:52.526906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.526923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.534500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fb480 00:28:22.131 [2024-10-09 00:35:52.535385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.535401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.542934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eee38 00:28:22.131 [2024-10-09 00:35:52.543780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.543796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.551387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e6738 00:28:22.131 [2024-10-09 00:35:52.552283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.552298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.559844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e99d8 00:28:22.131 [2024-10-09 00:35:52.560734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.560750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.568283] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f81e0 00:28:22.131 [2024-10-09 00:35:52.569132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.569148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.576739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198feb58 00:28:22.131 [2024-10-09 00:35:52.577629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.577645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.585168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e84c0 00:28:22.131 [2024-10-09 00:35:52.586000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.586016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.593607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e73e0 00:28:22.131 [2024-10-09 00:35:52.594491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.594509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.602046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f6cc8 00:28:22.131 [2024-10-09 00:35:52.602919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.602934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.610480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebfd0 00:28:22.131 [2024-10-09 00:35:52.611373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.611388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.618931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f6020 00:28:22.131 [2024-10-09 00:35:52.619800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.619816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 
00:35:52.627372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e3060 00:28:22.131 [2024-10-09 00:35:52.628255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.628271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.635809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f4b08 00:28:22.131 [2024-10-09 00:35:52.636700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.131 [2024-10-09 00:35:52.636716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.131 [2024-10-09 00:35:52.644248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebb98 00:28:22.132 [2024-10-09 00:35:52.645144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.645160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.652703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2948 00:28:22.132 [2024-10-09 00:35:52.653592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.653608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.661159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f92c0 00:28:22.132 [2024-10-09 00:35:52.661989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.662006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.669594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e2c28 00:28:22.132 [2024-10-09 00:35:52.670450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.670467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:22.132 29945.00 IOPS, 116.97 MiB/s [2024-10-08T22:35:52.767Z] [2024-10-09 00:35:52.678303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f3e60 00:28:22.132 [2024-10-09 00:35:52.679050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.679066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.686955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e7c50 00:28:22.132 [2024-10-09 00:35:52.687937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.687952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.695394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eb328 00:28:22.132 [2024-10-09 00:35:52.696412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.696428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.703872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ed4e8 00:28:22.132 [2024-10-09 00:35:52.704834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.704850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.712311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f5378 00:28:22.132 [2024-10-09 00:35:52.713297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.713313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.720749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2510 00:28:22.132 [2024-10-09 00:35:52.721717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.721737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.729180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f96f8 00:28:22.132 [2024-10-09 00:35:52.730206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.730222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.737614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0ff8 00:28:22.132 [2024-10-09 00:35:52.738630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.738645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.746059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fa7d8 00:28:22.132 [2024-10-09 00:35:52.747055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.747071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.754508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f6890 00:28:22.132 [2024-10-09 00:35:52.755503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.755519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.132 [2024-10-09 00:35:52.762977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e84c0 00:28:22.132 [2024-10-09 00:35:52.763983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.132 [2024-10-09 00:35:52.763999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.771406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebb98 00:28:22.394 [2024-10-09 00:35:52.772420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.772436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.779836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e3060 00:28:22.394 [2024-10-09 00:35:52.780787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.780803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.788271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebfd0 00:28:22.394 [2024-10-09 00:35:52.789280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.789295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.796712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2d80 00:28:22.394 [2024-10-09 00:35:52.797747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.797763] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.805173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fb480 00:28:22.394 [2024-10-09 00:35:52.806178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.806194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.813592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f3a28 00:28:22.394 [2024-10-09 00:35:52.814600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.814619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.822025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f8e88 00:28:22.394 [2024-10-09 00:35:52.823055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.823071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.830454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e7c50 00:28:22.394 [2024-10-09 00:35:52.831459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.831474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.838905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eb328 00:28:22.394 [2024-10-09 00:35:52.839885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.839900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.847358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ed4e8 00:28:22.394 [2024-10-09 00:35:52.848385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.848401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.855812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f5378 00:28:22.394 [2024-10-09 00:35:52.856787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.856803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.864256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2510 00:28:22.394 [2024-10-09 00:35:52.865268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.865284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.872679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f96f8 00:28:22.394 [2024-10-09 00:35:52.873680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.873696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.881158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0ff8 00:28:22.394 [2024-10-09 00:35:52.882180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.882196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.889593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fa7d8 00:28:22.394 [2024-10-09 00:35:52.890621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.890637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.898056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f6890 00:28:22.394 [2024-10-09 00:35:52.899081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.899096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.906496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e84c0 00:28:22.394 [2024-10-09 00:35:52.907507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.907522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.914924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebb98 00:28:22.394 [2024-10-09 00:35:52.915942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 
00:35:52.915957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.923363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e3060 00:28:22.394 [2024-10-09 00:35:52.924368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.394 [2024-10-09 00:35:52.924384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.394 [2024-10-09 00:35:52.931833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebfd0 00:28:22.394 [2024-10-09 00:35:52.932789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:52.932806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:52.940276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2d80 00:28:22.395 [2024-10-09 00:35:52.941290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:52.941306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:52.948702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fb480 00:28:22.395 [2024-10-09 00:35:52.949698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:52.949713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:52.957129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f3a28 00:28:22.395 [2024-10-09 00:35:52.958085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:52.958101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:52.965554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f8e88 00:28:22.395 [2024-10-09 00:35:52.966523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:52.966538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:52.973989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e7c50 00:28:22.395 [2024-10-09 00:35:52.974957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.395 [2024-10-09 00:35:52.974973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:52.982418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eb328 00:28:22.395 [2024-10-09 00:35:52.983434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:52.983450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:52.990865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ed4e8 00:28:22.395 [2024-10-09 00:35:52.991827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:52.991842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:52.999298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f5378 00:28:22.395 [2024-10-09 00:35:53.000301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:53.000317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:53.007751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2510 00:28:22.395 [2024-10-09 00:35:53.008749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:53.008764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:53.016187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f96f8 00:28:22.395 [2024-10-09 00:35:53.017190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:53.017205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.395 [2024-10-09 00:35:53.024625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0ff8 00:28:22.395 [2024-10-09 00:35:53.025631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.395 [2024-10-09 00:35:53.025647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.033079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fa7d8 00:28:22.656 [2024-10-09 00:35:53.034088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19618 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.034106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.041507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f6890 00:28:22.656 [2024-10-09 00:35:53.042526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.042541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.049928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e84c0 00:28:22.656 [2024-10-09 00:35:53.050926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.050942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.058383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebb98 00:28:22.656 [2024-10-09 00:35:53.059402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.059417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.066821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e3060 00:28:22.656 [2024-10-09 00:35:53.067781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.067797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.075280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebfd0 00:28:22.656 [2024-10-09 00:35:53.076280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.076295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.083706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2d80 00:28:22.656 [2024-10-09 00:35:53.084706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.084724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.092148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fb480 00:28:22.656 [2024-10-09 00:35:53.093139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11048 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.093155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.100570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f3a28 00:28:22.656 [2024-10-09 00:35:53.101564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.101580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.109008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f8e88 00:28:22.656 [2024-10-09 00:35:53.110017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.110033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.117429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e7c50 00:28:22.656 [2024-10-09 00:35:53.118434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.118450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.125918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eb328 00:28:22.656 [2024-10-09 00:35:53.126930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.126946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.134345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ed4e8 00:28:22.656 [2024-10-09 00:35:53.135367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.135382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.142760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f5378 00:28:22.656 [2024-10-09 00:35:53.143771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.143787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.151161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2510 00:28:22.656 [2024-10-09 00:35:53.152176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:39 nsid:1 lba:16291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.152192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.159588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f96f8 00:28:22.656 [2024-10-09 00:35:53.160584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.160600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.168029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f0ff8 00:28:22.656 [2024-10-09 00:35:53.169027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.169042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.176463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fa7d8 00:28:22.656 [2024-10-09 00:35:53.177481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.177497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.184871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f6890 00:28:22.656 [2024-10-09 00:35:53.185869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.185885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.193294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e84c0 00:28:22.656 [2024-10-09 00:35:53.194290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.194305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.201714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebb98 00:28:22.656 [2024-10-09 00:35:53.202712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.202731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.210153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e3060 00:28:22.656 [2024-10-09 00:35:53.211171] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.211187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.218582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ebfd0 00:28:22.656 [2024-10-09 00:35:53.219564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.219579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.227006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f2d80 00:28:22.656 [2024-10-09 00:35:53.228007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.656 [2024-10-09 00:35:53.228022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.656 [2024-10-09 00:35:53.235404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fb480 00:28:22.656 [2024-10-09 00:35:53.236414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.657 [2024-10-09 00:35:53.236429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.657 [2024-10-09 00:35:53.243815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f3a28 00:28:22.657 [2024-10-09 00:35:53.244669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.657 [2024-10-09 00:35:53.244685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:22.657 [2024-10-09 00:35:53.252534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eaab8 00:28:22.657 [2024-10-09 00:35:53.253634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.657 [2024-10-09 00:35:53.253652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:22.657 [2024-10-09 00:35:53.261118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df550 00:28:22.657 [2024-10-09 00:35:53.262190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.657 [2024-10-09 00:35:53.262206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.657 [2024-10-09 00:35:53.269543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198de470 00:28:22.657 [2024-10-09 00:35:53.270658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.657 [2024-10-09 00:35:53.270674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.657 [2024-10-09 00:35:53.277205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f57b0 00:28:22.657 [2024-10-09 00:35:53.278625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.657 [2024-10-09 00:35:53.278641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.657 [2024-10-09 00:35:53.285008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e7c50 00:28:22.657 [2024-10-09 00:35:53.285755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.657 [2024-10-09 00:35:53.285770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.293580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198ea680 00:28:22.917 [2024-10-09 00:35:53.294353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.294369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.302036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f8e88 00:28:22.917 [2024-10-09 00:35:53.302783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.302799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.310468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fb048 00:28:22.917 [2024-10-09 00:35:53.311241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.311256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.318883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198efae0 00:28:22.917 [2024-10-09 00:35:53.319596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.319612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.327304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fef90 00:28:22.917 [2024-10-09 
00:35:53.328071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.328087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.335723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f1ca0 00:28:22.917 [2024-10-09 00:35:53.336493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.336509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.344147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198de470 00:28:22.917 [2024-10-09 00:35:53.344928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.344944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.352576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df550 00:28:22.917 [2024-10-09 00:35:53.353352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.353368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.361004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198eaab8 00:28:22.917 [2024-10-09 00:35:53.361715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.361734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.369418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f35f0 00:28:22.917 [2024-10-09 00:35:53.370201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.370216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.377874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f1430 00:28:22.917 [2024-10-09 00:35:53.378659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.378674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.386321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f57b0 
00:28:22.917 [2024-10-09 00:35:53.387134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.387149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.394893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e88f8 00:28:22.917 [2024-10-09 00:35:53.395669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.395685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.403341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e7c50 00:28:22.917 [2024-10-09 00:35:53.404119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.404135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.411775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f8e88 00:28:22.917 [2024-10-09 00:35:53.412501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.412517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.420178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198efae0 00:28:22.917 [2024-10-09 00:35:53.420934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.917 [2024-10-09 00:35:53.420949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.917 [2024-10-09 00:35:53.428589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f1ca0 00:28:22.917 [2024-10-09 00:35:53.429355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.429371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.437018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df550 00:28:22.918 [2024-10-09 00:35:53.437795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.437810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.445506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with 
pdu=0x2000198f35f0 00:28:22.918 [2024-10-09 00:35:53.446278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.446294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.454982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f57b0 00:28:22.918 [2024-10-09 00:35:53.456171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.456186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.462444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198fd208 00:28:22.918 [2024-10-09 00:35:53.462962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.462978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.472599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e12d8 00:28:22.918 [2024-10-09 00:35:53.473911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.473929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.479513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e0630 00:28:22.918 [2024-10-09 00:35:53.480170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.480185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.487873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e4de8 00:28:22.918 [2024-10-09 00:35:53.488539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.488554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.497508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198df550 00:28:22.918 [2024-10-09 00:35:53.498726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.498741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.505833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x14e2450) with pdu=0x2000198f7100 00:28:22.918 [2024-10-09 00:35:53.506708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.506728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.513554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f1868 00:28:22.918 [2024-10-09 00:35:53.514453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.514469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.522172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198f1ca0 00:28:22.918 [2024-10-09 00:35:53.522954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.522969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.531044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e6b70 00:28:22.918 [2024-10-09 00:35:53.532027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.532042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.539326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:22.918 [2024-10-09 00:35:53.539565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.539580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.918 [2024-10-09 00:35:53.548009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:22.918 [2024-10-09 00:35:53.548294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.918 [2024-10-09 00:35:53.548311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.178 [2024-10-09 00:35:53.556753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.178 [2024-10-09 00:35:53.557039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.178 [2024-10-09 00:35:53.557054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.178 [2024-10-09 00:35:53.565454] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.178 [2024-10-09 00:35:53.565727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.565742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.574134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.574394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.574409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.582871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.583019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.583034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.591557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.591882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.591898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.600257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.600568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.600584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.608925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.609255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.609270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.617587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.617836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.617852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.626304] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.626601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.626617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.635034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.635334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.635350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.643760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.644068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.644083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.652448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.652728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.652743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.661134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.661412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.661428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.669824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 [2024-10-09 00:35:53.670100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.670115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 [2024-10-09 00:35:53.678589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2450) with pdu=0x2000198e1710 00:28:23.179 30070.00 IOPS, 117.46 MiB/s [2024-10-08T22:35:53.814Z] [2024-10-09 00:35:53.678878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.179 [2024-10-09 00:35:53.678892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.179 00:28:23.179 Latency(us) 00:28:23.179 [2024-10-08T22:35:53.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.179 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:23.179 nvme0n1 : 2.00 30064.58 117.44 0.00 0.00 4250.41 2048.00 11905.71 00:28:23.179 [2024-10-08T22:35:53.814Z] =================================================================================================================== 00:28:23.179 [2024-10-08T22:35:53.814Z] Total : 30064.58 117.44 0.00 0.00 4250.41 2048.00 11905.71 00:28:23.179 { 00:28:23.179 "results": [ 00:28:23.179 { 00:28:23.179 "job": "nvme0n1", 00:28:23.179 "core_mask": "0x2", 00:28:23.179 "workload": "randwrite", 00:28:23.179 "status": "finished", 00:28:23.179 "queue_depth": 128, 00:28:23.179 "io_size": 4096, 00:28:23.179 "runtime": 2.004352, 00:28:23.179 "iops": 30064.579475062266, 00:28:23.179 "mibps": 117.43976357446198, 00:28:23.179 "io_failed": 0, 00:28:23.179 "io_timeout": 0, 00:28:23.179 "avg_latency_us": 4250.41045425379, 00:28:23.179 "min_latency_us": 2048.0, 00:28:23.179 "max_latency_us": 11905.706666666667 00:28:23.179 } 00:28:23.179 ], 00:28:23.179 "core_count": 1 00:28:23.179 } 00:28:23.179 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:23.179 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:23.179 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:23.179 | .driver_specific 00:28:23.179 | .nvme_error 00:28:23.179 | .status_code 00:28:23.179 | .command_transient_transport_error' 00:28:23.179 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 )) 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3423271 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3423271 ']' 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3423271 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3423271 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3423271' 00:28:23.440 killing process with pid 3423271 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3423271 00:28:23.440 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.440 00:28:23.440 Latency(us) 
00:28:23.440 [2024-10-08T22:35:54.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.440 [2024-10-08T22:35:54.075Z] =================================================================================================================== 00:28:23.440 [2024-10-08T22:35:54.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.440 00:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3423271 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3423988 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3423988 /var/tmp/bperf.sock 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3423988 ']' 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.702 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.702 [2024-10-09 00:35:54.136770] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:28:23.702 [2024-10-09 00:35:54.136826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423988 ] 00:28:23.702 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.702 Zero copy mechanism will not be used. 
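The (( 236 > 0 )) check that closed out the previous run is the pass criterion: get_transient_errcount in host/digest.sh asks the bdevperf application how many completions on nvme0n1 ended with COMMAND TRANSIENT TRANSPORT ERROR while the CRC32C corruption was being injected. A minimal sketch of that query, reconstructed from the trace above (socket path, bdev name and jq filter are the ones shown in the trace; the counters are only collected because bdev_nvme_set_options --nvme-error-stat is requested when the controller is set up, as it is for the next run below):

# Pull the per-status-code NVMe error counters out of the bdev iostat and keep
# only the transient transport error count, i.e. the digest-error completions.
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')

# The run only passes if the injected corruption actually surfaced as transient
# transport errors (236 of them in the run above).
(( errcount > 0 ))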
00:28:23.702 [2024-10-09 00:35:54.212601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.702 [2024-10-09 00:35:54.265898] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.643 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.643 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:24.643 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:24.643 00:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:24.643 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:24.643 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.643 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.643 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.643 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.643 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.904 nvme0n1 00:28:24.904 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:24.904 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.904 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.904 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.904 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:24.904 00:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.904 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:24.904 Zero copy mechanism will not be used. 00:28:24.904 Running I/O for 2 seconds... 
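The "Running I/O for 2 seconds..." line above follows the same preparation sequence that the xtrace shows for this second pass (randwrite, 128 KiB I/Os, queue depth 16): NVMe error statistics are enabled on the initiator's bdev layer, any stale CRC32C injection is cleared, the controller is attached with data digest (--ddgst) enabled, and the accel error injector is switched to corrupting CRC32C calculations. A condensed sketch of that sequence, using the same helper names as host/digest.sh (bperf_rpc wraps rpc.py -s /var/tmp/bperf.sock as shown above; rpc_cmd is assumed here to address the NVMe-oF target application started earlier in the job):

# Initiator (bdevperf) side: keep per-status-code NVMe error counters and
# retry failed I/O indefinitely instead of failing the job.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure no CRC32C error injection is still active from the previous run.
rpc_cmd accel_error_inject_error -o crc32c -t disable

# Attach the subsystem with data digest enabled, so every data PDU carries a
# CRC32C that is verified on receipt.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Turn on CRC32C corruption in the accel path (flags taken verbatim from the
# trace); the mismatching digests are what show up below as "Data digest
# error" entries followed by COMMAND TRANSIENT TRANSPORT ERROR (00/22)
# completions.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the queued bdevperf workload (-w randwrite -o 131072 -q 16 -t 2 -z).
bperf_py perform_tests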
00:28:24.904 [2024-10-09 00:35:55.480701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:24.904 [2024-10-09 00:35:55.480919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.904 [2024-10-09 00:35:55.480953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.904 [2024-10-09 00:35:55.485306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:24.904 [2024-10-09 00:35:55.485501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.904 [2024-10-09 00:35:55.485521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.904 [2024-10-09 00:35:55.493764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:24.904 [2024-10-09 00:35:55.494088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.904 [2024-10-09 00:35:55.494108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.904 [2024-10-09 00:35:55.503466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:24.904 [2024-10-09 00:35:55.503763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.904 [2024-10-09 00:35:55.503782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.904 [2024-10-09 00:35:55.513060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:24.904 [2024-10-09 00:35:55.513368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.904 [2024-10-09 00:35:55.513386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.904 [2024-10-09 00:35:55.522169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:24.904 [2024-10-09 00:35:55.522480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.904 [2024-10-09 00:35:55.522498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.904 [2024-10-09 00:35:55.528187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:24.904 [2024-10-09 00:35:55.528377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.904 [2024-10-09 00:35:55.528394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.904 [2024-10-09 00:35:55.533768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:24.904 [2024-10-09 00:35:55.534060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.904 [2024-10-09 00:35:55.534078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.541642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.541860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.541876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.550482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.550818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.550836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.555690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.555884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.555901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.562733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.563050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.563068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.570386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.570578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.570595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.576268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.576583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.576602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.584455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.584645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.584661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.593254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.593564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.593582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.601995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.602301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.602318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.610555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.610859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.610877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.620068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.620364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.620381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.630156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.630469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.630487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.640994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.641287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.641304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.651595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.651838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.651858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.661917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.662131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.662147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.672437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.672672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.672688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.682846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.683114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.683132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.693127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.693397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.693413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.704269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.704602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.704628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.714408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.714614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 
[2024-10-09 00:35:55.714631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.724283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.724542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.724558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.734292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.734554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.734572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.744542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.744862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.744880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.753926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.754218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.754236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.764223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.764541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.764559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.168 [2024-10-09 00:35:55.774253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.168 [2024-10-09 00:35:55.774665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.168 [2024-10-09 00:35:55.774682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.169 [2024-10-09 00:35:55.784778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.169 [2024-10-09 00:35:55.785038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.169 [2024-10-09 00:35:55.785056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.169 [2024-10-09 00:35:55.794371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.169 [2024-10-09 00:35:55.794592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.169 [2024-10-09 00:35:55.794609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.804966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.805314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.805332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.815462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.815710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.815732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.826069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.826319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.826337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.836397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.836704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.836726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.846704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.846925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.846941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.856965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.857235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.857253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.866627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.866930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.866948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.876363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.876596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.876616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.885930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.886214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.886232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.895713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.895992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.896009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.905917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.906198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.906215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.915911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.916233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.916251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.926176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.926472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.926489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.935985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.936235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.936252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.945897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.946204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.946222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.955490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.955728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.955744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.965702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.965926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.965942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.975940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.976173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.976188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.982548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.982602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.982618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.989218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 
[2024-10-09 00:35:55.989331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.989347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.995382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.995785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.995802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:55.999253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:55.999323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:55.999338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:56.004254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:56.004501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:56.004518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:56.008450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:56.008510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:56.008525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.437 [2024-10-09 00:35:56.012655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.437 [2024-10-09 00:35:56.012716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.437 [2024-10-09 00:35:56.012737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.016274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.016326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.016341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.021321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.021376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.021391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.024699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.024755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.024771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.029582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.029627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.029642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.035625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.035668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.035683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.038854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.038916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.038931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.042891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.042987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.043002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.050064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.050122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.050138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.057364] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.057416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.057435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.438 [2024-10-09 00:35:56.063186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.438 [2024-10-09 00:35:56.063260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.438 [2024-10-09 00:35:56.063276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.699 [2024-10-09 00:35:56.070994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.699 [2024-10-09 00:35:56.071051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.699 [2024-10-09 00:35:56.071066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.699 [2024-10-09 00:35:56.078426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.699 [2024-10-09 00:35:56.078487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.699 [2024-10-09 00:35:56.078503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.699 [2024-10-09 00:35:56.083716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.699 [2024-10-09 00:35:56.083784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.699 [2024-10-09 00:35:56.083800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.699 [2024-10-09 00:35:56.091206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.699 [2024-10-09 00:35:56.091260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.699 [2024-10-09 00:35:56.091276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.699 [2024-10-09 00:35:56.097667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.699 [2024-10-09 00:35:56.097754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.699 [2024-10-09 00:35:56.097770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:25.699 [2024-10-09 00:35:56.104910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.104984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.105000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.110863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.111106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.111124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.119136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.119200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.119215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.125868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.125942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.125957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.130782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.130843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.130858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.137575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.137839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.137854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.145112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.145173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.145189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.153476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.153759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.153776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.161459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.161514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.161530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.165941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.165998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.166013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.169417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.169477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.169492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.174996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.175056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.175071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.184068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.184114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.184130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.191234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.191284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.191299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.197113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.197376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.197393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.203951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.204002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.204018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.211630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.211712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.211732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.220401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.220459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.220475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.226325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.226378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.226397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.234191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.234432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.234453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.240009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.240069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.240085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.244053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.244122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.244138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.247800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.247864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.247879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.252441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.252501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.252517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.258908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.258958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.258973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.265289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.265349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.265365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.273978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.274044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.274059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.283133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.283197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 
[2024-10-09 00:35:56.283213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.700 [2024-10-09 00:35:56.290695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.700 [2024-10-09 00:35:56.290874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.700 [2024-10-09 00:35:56.290889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.701 [2024-10-09 00:35:56.300249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.701 [2024-10-09 00:35:56.300312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.701 [2024-10-09 00:35:56.300328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.701 [2024-10-09 00:35:56.305627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.701 [2024-10-09 00:35:56.305684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.701 [2024-10-09 00:35:56.305699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.701 [2024-10-09 00:35:56.310953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.701 [2024-10-09 00:35:56.311071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.701 [2024-10-09 00:35:56.311086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.701 [2024-10-09 00:35:56.320125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.701 [2024-10-09 00:35:56.320305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.701 [2024-10-09 00:35:56.320320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.701 [2024-10-09 00:35:56.326082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.701 [2024-10-09 00:35:56.326139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.701 [2024-10-09 00:35:56.326154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.332769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.332832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.332847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.338759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.339006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.339021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.348344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.348638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.348658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.358396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.358563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.358579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.368486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.368705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.368729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.378583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.378826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.378842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.388575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.388660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.388675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.396196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.396244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.396260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.399758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.399837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.399853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.403006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.403073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.403089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.407714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.407789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.407805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.411589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.411668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.411684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.415400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.415476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.415491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.419035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.419116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.419132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.423557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.423636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.423652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.427627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.427692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.427707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.431755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.431831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.431849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.436378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.436593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.436609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.441139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.441204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.441219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.444864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.444931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.444946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.448278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.448344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.448360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.452066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 
[2024-10-09 00:35:56.452132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.452147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.457408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.457666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.457682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.464604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.464679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.464694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.471255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.471531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.471549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 4197.00 IOPS, 524.62 MiB/s [2024-10-08T22:35:56.596Z] [2024-10-09 00:35:56.478911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.479004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.479022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.484685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.484895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.484911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.490398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.490467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.490484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.493668] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.493738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.493758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.497037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.497112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.497130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.500321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.500393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.500409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.503347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.503425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.503440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.506605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.506748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.506764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.510269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.510404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.510421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.518424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.518691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.518708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:25.961 [2024-10-09 00:35:56.527048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.527302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.527318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.533648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.533730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.533745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.537115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.537183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.537199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.540481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.540551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.540569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.544290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.544377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.544392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.548171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.548240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.548256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.551332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.551401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.551416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.554622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.554689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.554705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.961 [2024-10-09 00:35:56.558425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.961 [2024-10-09 00:35:56.558499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.961 [2024-10-09 00:35:56.558516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.962 [2024-10-09 00:35:56.562313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.962 [2024-10-09 00:35:56.562607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.962 [2024-10-09 00:35:56.562624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.962 [2024-10-09 00:35:56.568369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.962 [2024-10-09 00:35:56.568605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.962 [2024-10-09 00:35:56.568620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.962 [2024-10-09 00:35:56.573561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.962 [2024-10-09 00:35:56.573624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.962 [2024-10-09 00:35:56.573640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.962 [2024-10-09 00:35:56.577402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.962 [2024-10-09 00:35:56.577464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.962 [2024-10-09 00:35:56.577480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.962 [2024-10-09 00:35:56.581244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.962 [2024-10-09 00:35:56.581300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.962 [2024-10-09 00:35:56.581316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.962 [2024-10-09 00:35:56.585739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.962 [2024-10-09 00:35:56.585796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.962 [2024-10-09 00:35:56.585812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.962 [2024-10-09 00:35:56.591906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:25.962 [2024-10-09 00:35:56.592199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.962 [2024-10-09 00:35:56.592216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.598225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.598301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.598317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.601616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.601684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.601700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.604916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.604985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.605003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.608214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.608287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.608305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.611415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.611488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.611504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.614667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.614745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.614761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.617396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.617463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.617479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.620520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.620590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.620610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.623769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.623843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.623858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.626899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.626967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.626984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.629663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.629732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.629748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.632193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.632260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 
[2024-10-09 00:35:56.632275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.634932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.635046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.635062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.638248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.638335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.638353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.640705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.640790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.640806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.643169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.643251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.643268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.645650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.645753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.645769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.648409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.648497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.648512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.651691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.651826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.651842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.657755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.657822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.657837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.663758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.663824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.663842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.666908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.666972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.666987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.670933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.671016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.671032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.675402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.675478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.675493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.679050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.679126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.679141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.682761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.682845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.224 [2024-10-09 00:35:56.682860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.224 [2024-10-09 00:35:56.687205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.224 [2024-10-09 00:35:56.687291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.687306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.691316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.691390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.691405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.695479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.695593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.695609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.702736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.702812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.702828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.706425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.706491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.706506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.710116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.710197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.710215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.717166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.717462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.717480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.725626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.725732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.725747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.731059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.731139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.731155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.734905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.734971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.734986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.738357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.738433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.738449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.745191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.745441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.745457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.749823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.749918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.749935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.753306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 
[2024-10-09 00:35:56.753385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.753400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.757401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.757646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.757661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.762175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.762245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.762263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.766896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.766964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.766979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.770415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.770483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.770498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.774153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.774223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.774238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.777769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.777841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.777857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.780575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.780639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.780657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.783085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.783167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.783183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.785637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.785712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.785732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.788191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.788265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.788280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.790809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.790884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.790899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.793935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.794035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.794052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.800318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.800559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.800574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.809536] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.809650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.809666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.819516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.225 [2024-10-09 00:35:56.819727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.225 [2024-10-09 00:35:56.819742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.225 [2024-10-09 00:35:56.827666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.827736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.827751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.226 [2024-10-09 00:35:56.832140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.832191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.832207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.226 [2024-10-09 00:35:56.835000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.835047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.835063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.226 [2024-10-09 00:35:56.837679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.837736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.837752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.226 [2024-10-09 00:35:56.840293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.840344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.840359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:26.226 [2024-10-09 00:35:56.842920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.842969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.842985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.226 [2024-10-09 00:35:56.845514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.845580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.845596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.226 [2024-10-09 00:35:56.848086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.848141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.848157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.226 [2024-10-09 00:35:56.851258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.851362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.851378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.226 [2024-10-09 00:35:56.854191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.226 [2024-10-09 00:35:56.854238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.226 [2024-10-09 00:35:56.854253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.489 [2024-10-09 00:35:56.856699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.489 [2024-10-09 00:35:56.856769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.489 [2024-10-09 00:35:56.856785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.489 [2024-10-09 00:35:56.859219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.489 [2024-10-09 00:35:56.859286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.489 [2024-10-09 00:35:56.859302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.489 [2024-10-09 00:35:56.861735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.489 [2024-10-09 00:35:56.861799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.489 [2024-10-09 00:35:56.861815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.489 [2024-10-09 00:35:56.864200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.489 [2024-10-09 00:35:56.864258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.489 [2024-10-09 00:35:56.864274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.489 [2024-10-09 00:35:56.867691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.489 [2024-10-09 00:35:56.867769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.489 [2024-10-09 00:35:56.867785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.489 [2024-10-09 00:35:56.870678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.489 [2024-10-09 00:35:56.870754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.489 [2024-10-09 00:35:56.870770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.489 [2024-10-09 00:35:56.873149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.489 [2024-10-09 00:35:56.873208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.489 [2024-10-09 00:35:56.873223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.489 [2024-10-09 00:35:56.875618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.489 [2024-10-09 00:35:56.875672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.489 [2024-10-09 00:35:56.875690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.489 [2024-10-09 00:35:56.878284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.878372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.878387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.881579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.881645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.881661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.884012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.884077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.884092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.886426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.886489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.886504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.889307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.889376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.889391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.894369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.894631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.894646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.902388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.902482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.902497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.905816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.905899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.905914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.911149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.911200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.911216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.915558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.915652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.915667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.921347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.921611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.921629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.928274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.928372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.928388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.932201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.932258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.932273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.935913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.936016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.936031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.942703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.942770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 
[2024-10-09 00:35:56.942786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.946095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.946146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.946161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.949170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.949211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.949229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.952120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.952163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.952179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.955367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.955427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.955443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.958115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.958158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.958173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.960781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.960847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.960862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.963335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.963380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.963395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.965834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.965882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.965898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.968527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.968595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.968610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.971255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.971303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.971323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.973850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.973907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.973922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.979131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.979353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.979368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.982269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.982350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.982366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.985627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.490 [2024-10-09 00:35:56.985736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.490 [2024-10-09 00:35:56.985751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.490 [2024-10-09 00:35:56.992954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:56.993187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:56.993202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.001324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.001559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.001574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.009803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.010069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.010084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.018172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.018446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.018463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.028246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.028475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.028491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.036951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.037187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.037203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.045042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.045111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.045127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.048926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.048973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.048988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.052400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.052445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.052460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.055825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.055899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.055914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.059303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.059365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.059379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.062921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.062972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.062988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.066690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.066754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.066770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.069901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 
[2024-10-09 00:35:57.069949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.069967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.073299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.073344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.073359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.076799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.076841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.076856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.082136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.082197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.082213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.085818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.085864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.085880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.089691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.089761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.089777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.097681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.097743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.097759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.102844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.102888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.102903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.106076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.106130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.106146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.109418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.109475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.109490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.112971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.113023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.113038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.491 [2024-10-09 00:35:57.118829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.491 [2024-10-09 00:35:57.119066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.491 [2024-10-09 00:35:57.119082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.123448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.123646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.123662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.127513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.127559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.127575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.131186] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.131250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.131265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.134618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.134687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.134702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.137831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.137876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.137891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.141055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.141099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.141114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.144304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.144356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.144371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.147677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.147723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.147739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.150939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.150991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.151006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
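[Editorial aside] The repeated "data_crc32_calc_done: Data digest error" records above, each followed by a WRITE command and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), come from the digest-error path of this test: the TCP transport recomputes a CRC-32C over each received data PDU and fails the command when the result does not match the transmitted data digest (DDGST). The following is a minimal, self-contained sketch of that kind of check, assuming the usual CRC-32C convention (initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF); it is not SPDK's actual implementation, and the helper name and bitwise CRC loop are illustrative assumptions only.

/*
 * Illustrative sketch: the kind of CRC-32C (Castagnoli) data-digest check
 * that produces a "Data digest error" when the payload is corrupted.
 * Not SPDK code; crc32c() and main() below are assumptions for a
 * self-contained, compilable example.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32C: reflected polynomial 0x82F63B78, init/final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* 32 bytes of payload, mirroring the len:32 WRITEs in the log records. */
    uint8_t pdu_data[32];
    memset(pdu_data, 0xA5, sizeof(pdu_data));

    /* Digest the sender would place in the PDU's DDGST field. */
    uint32_t expected_ddgst = crc32c(pdu_data, sizeof(pdu_data));

    /* Flip one bit to emulate corruption of the data on the wire. */
    pdu_data[7] ^= 0x01;
    uint32_t actual_ddgst = crc32c(pdu_data, sizeof(pdu_data));

    if (actual_ddgst != expected_ddgst) {
        /* The transport would fail the command here; the host then sees a
         * completion with a transient transport error status, as logged above. */
        printf("Data digest error: expected 0x%08x, got 0x%08x\n",
               expected_ddgst, actual_ddgst);
    }
    return 0;
}

Under these assumptions the mismatch branch fires for every injected corruption, which is why the test emits one digest-error record per WRITE rather than completing any of them successfully.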
00:28:26.754 [2024-10-09 00:35:57.153398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.153461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.153476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.156000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.156065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.156080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.158544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.158589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.158604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.161040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.161081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.161099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.163784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.163853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.163868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.166997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.167054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.167074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.169433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.169489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.169504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.171852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.171914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.171929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.174289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.174351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.174369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.176728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.176777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.176793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.179147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.179202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.179217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.181571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.181626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.181642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.184063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.184106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.184121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.187007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.754 [2024-10-09 00:35:57.187116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.754 [2024-10-09 00:35:57.187131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.754 [2024-10-09 00:35:57.191348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.191399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.191414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.199638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.199689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.199704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.202598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.202655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.202671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.205048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.205090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.205107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.207474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.207528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.207543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.209908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.209951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.209966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.212351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.212397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.212412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.214805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.214866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.214881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.217247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.217298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.217314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.219701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.219753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.219768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.222141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.222192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.222207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.224596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.224645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.224660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.227223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.227264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.227279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.233935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.233979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 
[2024-10-09 00:35:57.233994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.240810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.241073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.241088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.248301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.248545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.248562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.255006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.255330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.255346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.260552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.260611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.260627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.263689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.263738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.263753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.267098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.267145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.267160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.270755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.270798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.270812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.274807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.274877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.274892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.278911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.278969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.278985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.282883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.282927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.282942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.287060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.287113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.287128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.291127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.291173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.291188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.295600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.295667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.295684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.299714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.299765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.299781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.755 [2024-10-09 00:35:57.303262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.755 [2024-10-09 00:35:57.303318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.755 [2024-10-09 00:35:57.303333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.307153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.307197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.307213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.310952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.311046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.311061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.314451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.314493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.314508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.319221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.319366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.319381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.325447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.325496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.325511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.331885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.331931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.331949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.334803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.334848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.334863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.338054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.338134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.338149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.341264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.341340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.341355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.344432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.344498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.344513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.347238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.347290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.347305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.349734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.349796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.349811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.352211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 
[2024-10-09 00:35:57.352257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.352272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.354697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.354762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.354779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.357183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.357237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.357253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.359636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.359697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.359712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.362433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.362511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.362527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.365076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.365121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.365137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.368063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.368144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.368160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.373726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.373925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.373940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.756 [2024-10-09 00:35:57.383120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:26.756 [2024-10-09 00:35:57.383204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.756 [2024-10-09 00:35:57.383220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.388183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.017 [2024-10-09 00:35:57.388353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.017 [2024-10-09 00:35:57.388369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.395323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.017 [2024-10-09 00:35:57.395461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.017 [2024-10-09 00:35:57.395476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.403189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.017 [2024-10-09 00:35:57.403237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.017 [2024-10-09 00:35:57.403253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.409817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.017 [2024-10-09 00:35:57.410053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.017 [2024-10-09 00:35:57.410069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.420008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.017 [2024-10-09 00:35:57.420237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.017 [2024-10-09 00:35:57.420253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.429949] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.017 [2024-10-09 00:35:57.430213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.017 [2024-10-09 00:35:57.430230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.439909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.017 [2024-10-09 00:35:57.440139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.017 [2024-10-09 00:35:57.440154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.448210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.017 [2024-10-09 00:35:57.448317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.017 [2024-10-09 00:35:57.448336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.455143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.017 [2024-10-09 00:35:57.455212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.017 [2024-10-09 00:35:57.455227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.017 [2024-10-09 00:35:57.459242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.018 [2024-10-09 00:35:57.459340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.018 [2024-10-09 00:35:57.459356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.018 [2024-10-09 00:35:57.463511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.018 [2024-10-09 00:35:57.463603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.018 [2024-10-09 00:35:57.463621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.018 [2024-10-09 00:35:57.468635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.018 [2024-10-09 00:35:57.468858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.018 [2024-10-09 00:35:57.468874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:27.018 [2024-10-09 00:35:57.472920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.018 [2024-10-09 00:35:57.472976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.018 [2024-10-09 00:35:57.472991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.018 [2024-10-09 00:35:57.475745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.018 [2024-10-09 00:35:57.475814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.018 [2024-10-09 00:35:57.475832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.018 5819.50 IOPS, 727.44 MiB/s [2024-10-08T22:35:57.653Z] [2024-10-09 00:35:57.479686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14e2790) with pdu=0x2000198fef90 00:28:27.018 [2024-10-09 00:35:57.479736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.018 [2024-10-09 00:35:57.479755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.018 00:28:27.018 Latency(us) 00:28:27.018 [2024-10-08T22:35:57.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.018 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:27.018 nvme0n1 : 2.00 5819.91 727.49 0.00 0.00 2745.16 1153.71 10868.05 00:28:27.018 [2024-10-08T22:35:57.653Z] =================================================================================================================== 00:28:27.018 [2024-10-08T22:35:57.653Z] Total : 5819.91 727.49 0.00 0.00 2745.16 1153.71 10868.05 00:28:27.018 { 00:28:27.018 "results": [ 00:28:27.018 { 00:28:27.018 "job": "nvme0n1", 00:28:27.018 "core_mask": "0x2", 00:28:27.018 "workload": "randwrite", 00:28:27.018 "status": "finished", 00:28:27.018 "queue_depth": 16, 00:28:27.018 "io_size": 131072, 00:28:27.018 "runtime": 2.003297, 00:28:27.018 "iops": 5819.905885148333, 00:28:27.018 "mibps": 727.4882356435417, 00:28:27.018 "io_failed": 0, 00:28:27.018 "io_timeout": 0, 00:28:27.018 "avg_latency_us": 2745.1608176801897, 00:28:27.018 "min_latency_us": 1153.7066666666667, 00:28:27.018 "max_latency_us": 10868.053333333333 00:28:27.018 } 00:28:27.018 ], 00:28:27.018 "core_count": 1 00:28:27.018 } 00:28:27.018 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:27.018 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:27.018 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:27.018 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:27.018 | .driver_specific 00:28:27.018 | .nvme_error 00:28:27.018 | .status_code 00:28:27.018 | .command_transient_transport_error' 
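The JSON block above is the bdevperf result for this randwrite error-injection pass, and its headline numbers are internally consistent: MiB/s is just IOPS times the 131072-byte IO size, and IOPS times runtime gives the IO count. A quick arithmetic check with awk, using only values copied from the results block:

```bash
# Sanity-check the bdevperf summary above; every constant is taken from the log.
awk 'BEGIN {
  iops    = 5819.905885148333   # "iops" from the results block
  runtime = 2.003297            # "runtime" in seconds
  io_size = 131072              # 128 KiB per IO, from "io_size"
  printf "throughput: %.2f MiB/s\n", iops * io_size / (1024 * 1024)  # ~727.49, matches "mibps"
  printf "IOs completed: %.0f\n", iops * runtime                     # ~11659
}'
```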
00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 376 > 0 )) 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3423988 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3423988 ']' 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3423988 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3423988 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3423988' 00:28:27.292 killing process with pid 3423988 00:28:27.292 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3423988 00:28:27.292 Received shutdown signal, test time was about 2.000000 seconds 00:28:27.292 00:28:27.292 Latency(us) 00:28:27.292 [2024-10-08T22:35:57.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.292 [2024-10-08T22:35:57.927Z] =================================================================================================================== 00:28:27.293 [2024-10-08T22:35:57.928Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.293 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3423988 00:28:27.293 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3421555 00:28:27.293 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3421555 ']' 00:28:27.293 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3421555 00:28:27.293 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:27.293 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:27.293 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3421555 00:28:27.561 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:27.561 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:27.561 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3421555' 00:28:27.561 killing process with pid 3421555 00:28:27.561 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3421555 00:28:27.561 00:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3421555 00:28:27.561 00:28:27.561 real 0m16.634s 
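The pass condition traced above comes from host/digest.sh: after injecting data-digest (CRC32C) corruption it pulls the controller's transient-transport-error counter out of bdev_get_iostat and requires it to be non-zero, which it is here (376). A hedged reconstruction of that query as a standalone snippet, assuming the bperf RPC socket is still listening at /var/tmp/bperf.sock:

```bash
# Same RPC and jq filter as the trace above, wrapped for standalone use.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
           | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
                    | .command_transient_transport_error')
(( errcount > 0 )) && echo "injected digest errors surfaced as $errcount transient transport errors"
```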
00:28:27.561 user 0m32.989s 00:28:27.561 sys 0m3.653s 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.561 ************************************ 00:28:27.561 END TEST nvmf_digest_error 00:28:27.561 ************************************ 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.561 rmmod nvme_tcp 00:28:27.561 rmmod nvme_fabrics 00:28:27.561 rmmod nvme_keyring 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 3421555 ']' 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 3421555 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3421555 ']' 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3421555 00:28:27.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3421555) - No such process 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3421555 is not found' 00:28:27.561 Process with pid 3421555 is not found 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.561 00:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # 
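nvmftestfini's cleanup path is spelled out by the trace: unload the host-side NVMe-oF modules, strip the iptables rules tagged SPDK_NVMF, drop the target's network namespace and flush the test address. A minimal standalone sketch of the same sequence (run as root; the ip netns delete line is an assumption, since _remove_spdk_ns itself is not expanded in the log):

```bash
# Hedged sketch of the teardown traced above, not the verbatim nvmftestfini.
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK_NVMF rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator-side test address
```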
_remove_spdk_ns 00:28:30.104 00:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:30.104 00:28:30.104 real 0m43.715s 00:28:30.104 user 1m8.653s 00:28:30.104 sys 0m13.297s 00:28:30.104 00:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:30.104 00:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.104 ************************************ 00:28:30.104 END TEST nvmf_digest 00:28:30.104 ************************************ 00:28:30.104 00:36:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.105 ************************************ 00:28:30.105 START TEST nvmf_bdevperf 00:28:30.105 ************************************ 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:30.105 * Looking for test storage... 00:28:30.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 
00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.105 --rc genhtml_branch_coverage=1 00:28:30.105 --rc genhtml_function_coverage=1 00:28:30.105 --rc genhtml_legend=1 00:28:30.105 --rc geninfo_all_blocks=1 00:28:30.105 --rc geninfo_unexecuted_blocks=1 00:28:30.105 00:28:30.105 ' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.105 --rc genhtml_branch_coverage=1 00:28:30.105 --rc genhtml_function_coverage=1 00:28:30.105 --rc genhtml_legend=1 00:28:30.105 --rc geninfo_all_blocks=1 00:28:30.105 --rc geninfo_unexecuted_blocks=1 00:28:30.105 00:28:30.105 ' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.105 --rc genhtml_branch_coverage=1 00:28:30.105 --rc genhtml_function_coverage=1 00:28:30.105 --rc genhtml_legend=1 00:28:30.105 --rc geninfo_all_blocks=1 00:28:30.105 --rc geninfo_unexecuted_blocks=1 00:28:30.105 00:28:30.105 ' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.105 --rc genhtml_branch_coverage=1 00:28:30.105 --rc genhtml_function_coverage=1 00:28:30.105 --rc genhtml_legend=1 00:28:30.105 --rc geninfo_all_blocks=1 00:28:30.105 --rc geninfo_unexecuted_blocks=1 00:28:30.105 00:28:30.105 ' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
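The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both strings on dots and compares field by field, and because it returns true the legacy --rc lcov_* option names are selected. A rough standalone equivalent built on sort -V (ver_lt is a hypothetical helper name, not the repository's):

```bash
# Hypothetical stand-in for the lt()/cmp_versions helpers traced above.
ver_lt() {
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lcov_ver=$(lcov --version | awk '{print $NF}')
if ver_lt "$lcov_ver" 2; then
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
```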
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:30.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:30.105 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
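MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 are the backing-store parameters bdevperf.sh defines for the target it is about to bring up: a 64 MiB RAM-backed bdev with 512-byte blocks. The target-side wiring has not appeared in the trace at this point, so the following is only an illustrative sketch using standard SPDK RPCs; the bdev name Malloc0 and subsystem NQN cnode1 are assumptions, not values from this run:

```bash
# Illustrative target setup using the sizes defined above (hypothetical names).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                    # MALLOC_BDEV_SIZE MiB, MALLOC_BLOCK_SIZE B
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```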
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:30.106 00:36:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:38.253 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:38.253 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.253 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:38.254 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:38.254 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:38.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:28:38.254 00:28:38.254 --- 10.0.0.2 ping statistics --- 00:28:38.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.254 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:38.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:28:38.254 00:28:38.254 --- 10.0.0.1 ping statistics --- 00:28:38.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.254 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3429077 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3429077 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3429077 ']' 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:38.254 00:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.254 [2024-10-09 00:36:08.049431] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
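
For readability, the nvmf_tcp_init steps traced above boil down to the sequence below. This is a condensed sketch that reuses the interface names (cvl_0_0, cvl_0_1), namespace (cvl_0_0_ns_spdk), port 4420 and 10.0.0.0/24 addresses appearing in the log; it is not a substitute for the harness's own common.sh helpers.

    # move the target-side port into its own network namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1 on the host side, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic to the default port and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Splitting the two e810 ports across namespaces like this keeps initiator-to-target traffic on the physical link instead of being short-circuited over the host's local route, and the two pings above confirm that path before the target is started.
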
00:28:38.254 [2024-10-09 00:36:08.049490] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.254 [2024-10-09 00:36:08.135183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:38.254 [2024-10-09 00:36:08.200356] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.254 [2024-10-09 00:36:08.200394] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.254 [2024-10-09 00:36:08.200402] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.254 [2024-10-09 00:36:08.200409] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.254 [2024-10-09 00:36:08.200415] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.254 [2024-10-09 00:36:08.201565] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.254 [2024-10-09 00:36:08.201713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.254 [2024-10-09 00:36:08.201715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:38.254 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:38.254 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:38.254 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:38.254 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:38.254 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.515 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.515 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:38.515 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.515 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.516 [2024-10-09 00:36:08.901715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.516 Malloc0 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:38.516 [2024-10-09 00:36:08.980228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:38.516 { 00:28:38.516 "params": { 00:28:38.516 "name": "Nvme$subsystem", 00:28:38.516 "trtype": "$TEST_TRANSPORT", 00:28:38.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.516 "adrfam": "ipv4", 00:28:38.516 "trsvcid": "$NVMF_PORT", 00:28:38.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.516 "hdgst": ${hdgst:-false}, 00:28:38.516 "ddgst": ${ddgst:-false} 00:28:38.516 }, 00:28:38.516 "method": "bdev_nvme_attach_controller" 00:28:38.516 } 00:28:38.516 EOF 00:28:38.516 )") 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:28:38.516 00:36:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:38.516 00:36:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:38.516 "params": { 00:28:38.516 "name": "Nvme1", 00:28:38.516 "trtype": "tcp", 00:28:38.516 "traddr": "10.0.0.2", 00:28:38.516 "adrfam": "ipv4", 00:28:38.516 "trsvcid": "4420", 00:28:38.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.516 "hdgst": false, 00:28:38.516 "ddgst": false 00:28:38.516 }, 00:28:38.516 "method": "bdev_nvme_attach_controller" 00:28:38.516 }' 00:28:38.516 [2024-10-09 00:36:09.037548] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
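
The rpc_cmd calls traced above are the entire target-side bring-up for this test: TCP transport, a 64 MiB malloc bdev, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420. Outside the harness the same configuration can be replayed with SPDK's standard RPC client; the sketch below simply repeats the arguments from the log and assumes the default /var/tmp/spdk.sock socket that waitforlisten polls.

    # transport, backing bdev, subsystem, namespace, listener - same arguments as the rpc_cmd trace above
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, bdevperf takes its configuration as startup JSON rather than live RPCs: gen_nvmf_target_json expands the heredoc template shown above into the bdev_nvme_attach_controller block for Nvme1 (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1) and passes it to bdevperf as --json /dev/fd/62.
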
00:28:38.516 [2024-10-09 00:36:09.037630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429425 ] 00:28:38.516 [2024-10-09 00:36:09.122512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.778 [2024-10-09 00:36:09.219250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.039 Running I/O for 1 seconds... 00:28:39.982 8498.00 IOPS, 33.20 MiB/s 00:28:39.982 Latency(us) 00:28:39.982 [2024-10-08T22:36:10.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.982 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:39.982 Verification LBA range: start 0x0 length 0x4000 00:28:39.982 Nvme1n1 : 1.02 8581.66 33.52 0.00 0.00 14855.12 3276.80 13653.33 00:28:39.982 [2024-10-08T22:36:10.617Z] =================================================================================================================== 00:28:39.982 [2024-10-08T22:36:10.617Z] Total : 8581.66 33.52 0.00 0.00 14855.12 3276.80 13653.33 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3429975 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:40.243 { 00:28:40.243 "params": { 00:28:40.243 "name": "Nvme$subsystem", 00:28:40.243 "trtype": "$TEST_TRANSPORT", 00:28:40.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.243 "adrfam": "ipv4", 00:28:40.243 "trsvcid": "$NVMF_PORT", 00:28:40.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.243 "hdgst": ${hdgst:-false}, 00:28:40.243 "ddgst": ${ddgst:-false} 00:28:40.243 }, 00:28:40.243 "method": "bdev_nvme_attach_controller" 00:28:40.243 } 00:28:40.243 EOF 00:28:40.243 )") 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
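
The second bdevperf invocation above is the interesting one: it runs the same verify workload for 15 seconds (-q 128 -o 4096 -w verify -t 15 -f), and a few seconds in the harness deliberately kills the nvmf_tgt process (kill -9 3429077, in the trace that follows) while I/O is still in flight. Condensed, the pattern looks roughly like the sketch below; it uses process substitution in place of the harness's /dev/fd/63 plumbing and an illustrative $nvmfpid variable for the target PID recorded earlier.

    # start a long-running verify job against the remote namespace, then yank the target
    bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    sleep 3
    kill -9 "$nvmfpid"    # nvmf_tgt pid recorded at startup (3429077 in this run)
    sleep 3               # give the initiator time to notice the dead connection

The wall of "ABORTED - SQ DELETION (00/08)" completions that follows is the expected signature of killing the target mid-run: every outstanding READ/WRITE on qid:1 is completed with that status as the qpair is torn down, not an indication of data corruption on the verify workload.
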
00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:40.243 00:36:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:40.243 "params": { 00:28:40.243 "name": "Nvme1", 00:28:40.243 "trtype": "tcp", 00:28:40.243 "traddr": "10.0.0.2", 00:28:40.243 "adrfam": "ipv4", 00:28:40.243 "trsvcid": "4420", 00:28:40.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:40.243 "hdgst": false, 00:28:40.243 "ddgst": false 00:28:40.243 }, 00:28:40.243 "method": "bdev_nvme_attach_controller" 00:28:40.243 }' 00:28:40.243 [2024-10-09 00:36:10.679362] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:28:40.243 [2024-10-09 00:36:10.679440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429975 ] 00:28:40.243 [2024-10-09 00:36:10.762954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.243 [2024-10-09 00:36:10.858925] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.829 Running I/O for 15 seconds... 00:28:42.709 10006.00 IOPS, 39.09 MiB/s [2024-10-08T22:36:13.918Z] 10636.00 IOPS, 41.55 MiB/s [2024-10-08T22:36:13.918Z] 00:36:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3429077 00:28:43.283 00:36:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:43.283 [2024-10-09 00:36:13.643030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 
00:36:13.643181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.283 [2024-10-09 00:36:13.643488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.283 [2024-10-09 00:36:13.643496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 
[2024-10-09 00:36:13.643912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.643987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.643997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.644004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.644013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.644020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.644030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.644037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.644046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.644054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.644063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.644070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.644079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.644087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.644097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.644104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.644113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.644120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.644131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.284 [2024-10-09 00:36:13.644139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.284 [2024-10-09 00:36:13.644148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.285 [2024-10-09 00:36:13.644222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.285 [2024-10-09 00:36:13.644239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.285 [2024-10-09 00:36:13.644256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.285 [2024-10-09 00:36:13.644272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.285 [2024-10-09 00:36:13.644289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.285 [2024-10-09 00:36:13.644306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.285 [2024-10-09 00:36:13.644322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78360 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:43.285 [2024-10-09 00:36:13.644592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.285 [2024-10-09 00:36:13.644729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644762] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.285 [2024-10-09 00:36:13.644791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.285 [2024-10-09 00:36:13.644798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.644986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.644995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.286 [2024-10-09 00:36:13.645272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75caf0 is same with the state(6) to be set 00:28:43.286 [2024-10-09 00:36:13.645290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:43.286 [2024-10-09 00:36:13.645296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:43.286 [2024-10-09 00:36:13.645303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78760 len:8 PRP1 0x0 PRP2 0x0 00:28:43.286 [2024-10-09 00:36:13.645312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.286 [2024-10-09 00:36:13.645350] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x75caf0 was disconnected and freed. reset controller. 00:28:43.286 [2024-10-09 00:36:13.649056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.286 [2024-10-09 00:36:13.649107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.286 [2024-10-09 00:36:13.650032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.286 [2024-10-09 00:36:13.650070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.286 [2024-10-09 00:36:13.650082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.286 [2024-10-09 00:36:13.650319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.286 [2024-10-09 00:36:13.650539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.286 [2024-10-09 00:36:13.650548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.286 [2024-10-09 00:36:13.650558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.286 [2024-10-09 00:36:13.654065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
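The dump above is the tail end of a queued-I/O abort pass: once the target side of the TCP connection is gone, every outstanding READ/WRITE on qpair 0x75caf0 is completed manually with status (00/08), which the driver prints as ABORTED - SQ DELETION, and the qpair is freed so the controller can be reset. A minimal, illustrative decoder for that (sct/sc) pair is sketched below; it only covers the codes that actually appear in this log and is not taken from the SPDK sources.

    #include <stdio.h>

    /* Illustrative decoder for the "(SCT/SC)" status pair printed above,
     * e.g. (00/08): status code type 0x0 (generic) with status code 0x08,
     * "Command Aborted due to SQ Deletion". Only the codes seen in this
     * log are handled; everything else falls through. */
    static const char *decode_status(unsigned int sct, unsigned int sc)
    {
        if (sct == 0x0 && sc == 0x00)
            return "SUCCESS";
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "UNKNOWN";
    }

    int main(void)
    {
        /* The completions above all carry (00/08). */
        printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
        return 0;
    }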
00:28:43.286 [2024-10-09 00:36:13.663137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.286 [2024-10-09 00:36:13.663784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.286 [2024-10-09 00:36:13.663836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.286 [2024-10-09 00:36:13.663849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.286 [2024-10-09 00:36:13.664087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.286 [2024-10-09 00:36:13.664307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.286 [2024-10-09 00:36:13.664316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.664324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.667821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.287 [2024-10-09 00:36:13.676870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.287 [2024-10-09 00:36:13.677512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.287 [2024-10-09 00:36:13.677551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.287 [2024-10-09 00:36:13.677562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.287 [2024-10-09 00:36:13.677807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.287 [2024-10-09 00:36:13.678028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.287 [2024-10-09 00:36:13.678037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.678045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.681574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.287 [2024-10-09 00:36:13.690646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.287 [2024-10-09 00:36:13.691283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.287 [2024-10-09 00:36:13.691324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.287 [2024-10-09 00:36:13.691336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.287 [2024-10-09 00:36:13.691573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.287 [2024-10-09 00:36:13.691802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.287 [2024-10-09 00:36:13.691811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.691819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.695315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.287 [2024-10-09 00:36:13.704570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.287 [2024-10-09 00:36:13.705246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.287 [2024-10-09 00:36:13.705289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.287 [2024-10-09 00:36:13.705300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.287 [2024-10-09 00:36:13.705538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.287 [2024-10-09 00:36:13.705772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.287 [2024-10-09 00:36:13.705782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.705790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.709287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.287 [2024-10-09 00:36:13.718344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.287 [2024-10-09 00:36:13.718893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.287 [2024-10-09 00:36:13.718914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.287 [2024-10-09 00:36:13.718923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.287 [2024-10-09 00:36:13.719139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.287 [2024-10-09 00:36:13.719355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.287 [2024-10-09 00:36:13.719364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.719371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.722870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.287 [2024-10-09 00:36:13.732125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.287 [2024-10-09 00:36:13.732696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.287 [2024-10-09 00:36:13.732714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.287 [2024-10-09 00:36:13.732730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.287 [2024-10-09 00:36:13.732946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.287 [2024-10-09 00:36:13.733161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.287 [2024-10-09 00:36:13.733169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.733176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.736662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.287 [2024-10-09 00:36:13.745912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.287 [2024-10-09 00:36:13.746453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.287 [2024-10-09 00:36:13.746470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.287 [2024-10-09 00:36:13.746477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.287 [2024-10-09 00:36:13.746693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.287 [2024-10-09 00:36:13.746916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.287 [2024-10-09 00:36:13.746925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.746932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.750422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.287 [2024-10-09 00:36:13.759689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.287 [2024-10-09 00:36:13.760329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.287 [2024-10-09 00:36:13.760378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.287 [2024-10-09 00:36:13.760391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.287 [2024-10-09 00:36:13.760635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.287 [2024-10-09 00:36:13.760867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.287 [2024-10-09 00:36:13.760878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.760889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.764395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.287 [2024-10-09 00:36:13.773464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.287 [2024-10-09 00:36:13.774171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.287 [2024-10-09 00:36:13.774223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.287 [2024-10-09 00:36:13.774236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.287 [2024-10-09 00:36:13.774481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.287 [2024-10-09 00:36:13.774703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.287 [2024-10-09 00:36:13.774712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.774730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.778235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.287 [2024-10-09 00:36:13.787317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.287 [2024-10-09 00:36:13.788027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.287 [2024-10-09 00:36:13.788087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.287 [2024-10-09 00:36:13.788100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.287 [2024-10-09 00:36:13.788351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.287 [2024-10-09 00:36:13.788575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.287 [2024-10-09 00:36:13.788584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.287 [2024-10-09 00:36:13.788592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.287 [2024-10-09 00:36:13.792111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.288 [2024-10-09 00:36:13.801194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.288 [2024-10-09 00:36:13.801967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.288 [2024-10-09 00:36:13.802027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.288 [2024-10-09 00:36:13.802047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.288 [2024-10-09 00:36:13.802297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.288 [2024-10-09 00:36:13.802520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.288 [2024-10-09 00:36:13.802529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.288 [2024-10-09 00:36:13.802538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.288 [2024-10-09 00:36:13.806063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.288 [2024-10-09 00:36:13.814948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.288 [2024-10-09 00:36:13.815541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.288 [2024-10-09 00:36:13.815572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.288 [2024-10-09 00:36:13.815582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.288 [2024-10-09 00:36:13.815815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.288 [2024-10-09 00:36:13.816038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.288 [2024-10-09 00:36:13.816049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.288 [2024-10-09 00:36:13.816057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.288 [2024-10-09 00:36:13.819566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.288 [2024-10-09 00:36:13.828853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.288 [2024-10-09 00:36:13.829558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.288 [2024-10-09 00:36:13.829622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.288 [2024-10-09 00:36:13.829635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.288 [2024-10-09 00:36:13.829900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.288 [2024-10-09 00:36:13.830125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.288 [2024-10-09 00:36:13.830136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.288 [2024-10-09 00:36:13.830144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.288 [2024-10-09 00:36:13.833670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.288 [2024-10-09 00:36:13.842757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.288 [2024-10-09 00:36:13.843398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.288 [2024-10-09 00:36:13.843426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.288 [2024-10-09 00:36:13.843435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.288 [2024-10-09 00:36:13.843654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.288 [2024-10-09 00:36:13.843883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.288 [2024-10-09 00:36:13.843901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.288 [2024-10-09 00:36:13.843908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.288 [2024-10-09 00:36:13.847421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.288 [2024-10-09 00:36:13.856710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.288 [2024-10-09 00:36:13.857357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.288 [2024-10-09 00:36:13.857419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.288 [2024-10-09 00:36:13.857432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.288 [2024-10-09 00:36:13.857685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.288 [2024-10-09 00:36:13.857921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.288 [2024-10-09 00:36:13.857932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.288 [2024-10-09 00:36:13.857941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.288 [2024-10-09 00:36:13.861457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.288 [2024-10-09 00:36:13.870550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.288 [2024-10-09 00:36:13.871232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.288 [2024-10-09 00:36:13.871296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.288 [2024-10-09 00:36:13.871309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.288 [2024-10-09 00:36:13.871561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.288 [2024-10-09 00:36:13.871795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.288 [2024-10-09 00:36:13.871809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.288 [2024-10-09 00:36:13.871821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.288 [2024-10-09 00:36:13.875342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.288 [2024-10-09 00:36:13.884432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.288 [2024-10-09 00:36:13.885162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.288 [2024-10-09 00:36:13.885226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.288 [2024-10-09 00:36:13.885239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.288 [2024-10-09 00:36:13.885493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.288 [2024-10-09 00:36:13.885717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.288 [2024-10-09 00:36:13.885752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.288 [2024-10-09 00:36:13.885760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.288 [2024-10-09 00:36:13.889331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.288 [2024-10-09 00:36:13.898267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.288 [2024-10-09 00:36:13.898881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.288 [2024-10-09 00:36:13.898955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.288 [2024-10-09 00:36:13.898976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.288 [2024-10-09 00:36:13.899235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.288 [2024-10-09 00:36:13.899459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.288 [2024-10-09 00:36:13.899468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.288 [2024-10-09 00:36:13.899477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.288 [2024-10-09 00:36:13.903014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
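Every retry cycle in this stretch has the same shape: nvme_ctrlr_disconnect starts a reset, the plain TCP connect() to 10.0.0.2 port 4420 fails with errno 111, the reconnect poll then reports that controller reinitialization failed, and bdev_nvme logs "Resetting controller failed." before the next attempt roughly 13-14 ms later. On Linux errno 111 is ECONNREFUSED, i.e. nothing is listening on that port while the target is down. A self-contained sketch that reproduces the same errno is below; the address and port are taken from the log, the rest is illustrative and assumes a reachable host with no listener on 4420.

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Plain TCP connect to the address/port seen in the log; with no
     * listener on the port this fails with ECONNREFUSED (111 on Linux),
     * matching "connect() failed, errno = 111" above. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }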
00:28:43.288 [2024-10-09 00:36:13.912091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.288 [2024-10-09 00:36:13.912739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.288 [2024-10-09 00:36:13.912770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.288 [2024-10-09 00:36:13.912779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.288 [2024-10-09 00:36:13.913000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.288 [2024-10-09 00:36:13.913218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.288 [2024-10-09 00:36:13.913227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.288 [2024-10-09 00:36:13.913234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.551 [2024-10-09 00:36:13.916752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.551 [2024-10-09 00:36:13.925848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.551 [2024-10-09 00:36:13.926524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.551 [2024-10-09 00:36:13.926587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.551 [2024-10-09 00:36:13.926601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.551 [2024-10-09 00:36:13.926870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.551 [2024-10-09 00:36:13.927095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.551 [2024-10-09 00:36:13.927106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.551 [2024-10-09 00:36:13.927115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.551 [2024-10-09 00:36:13.930630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.551 [2024-10-09 00:36:13.939715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.551 [2024-10-09 00:36:13.940317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.551 [2024-10-09 00:36:13.940344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.551 [2024-10-09 00:36:13.940353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.551 [2024-10-09 00:36:13.940581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.551 [2024-10-09 00:36:13.940807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:13.940818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:13.940827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:13.944333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.552 [2024-10-09 00:36:13.953617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:13.954136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:13.954160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.552 [2024-10-09 00:36:13.954169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.552 [2024-10-09 00:36:13.954387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.552 [2024-10-09 00:36:13.954604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:13.954615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:13.954622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:13.958146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.552 [2024-10-09 00:36:13.967430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:13.967873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:13.967898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.552 [2024-10-09 00:36:13.967907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.552 [2024-10-09 00:36:13.968126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.552 [2024-10-09 00:36:13.968344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:13.968353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:13.968361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:13.971869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.552 [2024-10-09 00:36:13.981352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:13.981946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:13.982010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.552 [2024-10-09 00:36:13.982025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.552 [2024-10-09 00:36:13.982278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.552 [2024-10-09 00:36:13.982503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:13.982514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:13.982531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:13.986074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.552 [2024-10-09 00:36:13.995160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:13.995672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:13.995700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.552 [2024-10-09 00:36:13.995709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.552 [2024-10-09 00:36:13.995937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.552 [2024-10-09 00:36:13.996156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:13.996165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:13.996173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:13.999676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.552 [2024-10-09 00:36:14.008949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:14.009420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:14.009443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.552 [2024-10-09 00:36:14.009451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.552 [2024-10-09 00:36:14.009670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.552 [2024-10-09 00:36:14.009895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:14.009904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:14.009912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:14.013415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.552 [2024-10-09 00:36:14.022687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:14.023246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:14.023268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.552 [2024-10-09 00:36:14.023277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.552 [2024-10-09 00:36:14.023494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.552 [2024-10-09 00:36:14.023712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:14.023732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:14.023741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:14.027241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.552 [2024-10-09 00:36:14.036515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:14.037219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:14.037289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.552 [2024-10-09 00:36:14.037303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.552 [2024-10-09 00:36:14.037555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.552 [2024-10-09 00:36:14.037791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:14.037801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:14.037810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:14.041333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.552 [2024-10-09 00:36:14.050412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:14.051030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:14.051059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.552 [2024-10-09 00:36:14.051069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.552 [2024-10-09 00:36:14.051289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.552 [2024-10-09 00:36:14.051506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:14.051515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:14.051523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:14.055033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.552 [2024-10-09 00:36:14.064333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:14.065047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:14.065111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.552 [2024-10-09 00:36:14.065124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.552 [2024-10-09 00:36:14.065378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.552 [2024-10-09 00:36:14.065602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.552 [2024-10-09 00:36:14.065611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.552 [2024-10-09 00:36:14.065620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.552 [2024-10-09 00:36:14.069141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.552 [2024-10-09 00:36:14.078215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.552 [2024-10-09 00:36:14.078919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.552 [2024-10-09 00:36:14.078982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.553 [2024-10-09 00:36:14.078995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.553 [2024-10-09 00:36:14.079248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.553 [2024-10-09 00:36:14.079479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.553 [2024-10-09 00:36:14.079489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.553 [2024-10-09 00:36:14.079497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.553 [2024-10-09 00:36:14.083034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.553 [2024-10-09 00:36:14.092138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.553 [2024-10-09 00:36:14.092825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.553 [2024-10-09 00:36:14.092889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.553 [2024-10-09 00:36:14.092903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.553 [2024-10-09 00:36:14.093158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.553 [2024-10-09 00:36:14.093382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.553 [2024-10-09 00:36:14.093391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.553 [2024-10-09 00:36:14.093400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.553 [2024-10-09 00:36:14.096969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.553 [2024-10-09 00:36:14.106068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.553 [2024-10-09 00:36:14.106585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.553 [2024-10-09 00:36:14.106613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.553 [2024-10-09 00:36:14.106622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.553 [2024-10-09 00:36:14.106850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.553 [2024-10-09 00:36:14.107069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.553 [2024-10-09 00:36:14.107079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.553 [2024-10-09 00:36:14.107087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.553 [2024-10-09 00:36:14.110593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.553 [2024-10-09 00:36:14.120000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.553 [2024-10-09 00:36:14.120617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.553 [2024-10-09 00:36:14.120643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.553 [2024-10-09 00:36:14.120652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.553 [2024-10-09 00:36:14.120881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.553 [2024-10-09 00:36:14.121100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.553 [2024-10-09 00:36:14.121115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.553 [2024-10-09 00:36:14.121124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.553 [2024-10-09 00:36:14.124626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.553 [2024-10-09 00:36:14.133912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.553 [2024-10-09 00:36:14.134563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.553 [2024-10-09 00:36:14.134625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.553 [2024-10-09 00:36:14.134638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.553 [2024-10-09 00:36:14.134901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.553 [2024-10-09 00:36:14.135126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.553 [2024-10-09 00:36:14.135135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.553 [2024-10-09 00:36:14.135145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.553 [2024-10-09 00:36:14.138666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.553 [2024-10-09 00:36:14.147751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.553 [2024-10-09 00:36:14.148231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.553 [2024-10-09 00:36:14.148260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.553 [2024-10-09 00:36:14.148268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.553 [2024-10-09 00:36:14.148488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.553 [2024-10-09 00:36:14.148705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.553 [2024-10-09 00:36:14.148713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.553 [2024-10-09 00:36:14.148730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.553 [2024-10-09 00:36:14.152239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.553 [2024-10-09 00:36:14.161529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.553 [2024-10-09 00:36:14.162140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.553 [2024-10-09 00:36:14.162166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.553 [2024-10-09 00:36:14.162175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.553 [2024-10-09 00:36:14.162392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.553 [2024-10-09 00:36:14.162610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.553 [2024-10-09 00:36:14.162619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.553 [2024-10-09 00:36:14.162627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.553 [2024-10-09 00:36:14.166134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.553 [2024-10-09 00:36:14.175415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.553 [2024-10-09 00:36:14.176121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.553 [2024-10-09 00:36:14.176184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.553 [2024-10-09 00:36:14.176204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.553 [2024-10-09 00:36:14.176457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.553 [2024-10-09 00:36:14.176681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.553 [2024-10-09 00:36:14.176691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.553 [2024-10-09 00:36:14.176699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.553 [2024-10-09 00:36:14.180232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.816 [2024-10-09 00:36:14.189342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.816 [2024-10-09 00:36:14.189846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.816 [2024-10-09 00:36:14.189877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.816 [2024-10-09 00:36:14.189886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.190106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.817 [2024-10-09 00:36:14.190324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.817 [2024-10-09 00:36:14.190333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.817 [2024-10-09 00:36:14.190342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.817 [2024-10-09 00:36:14.193862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.817 8703.33 IOPS, 34.00 MiB/s [2024-10-08T22:36:14.452Z] [2024-10-09 00:36:14.204602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.817 [2024-10-09 00:36:14.205092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.817 [2024-10-09 00:36:14.205121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.817 [2024-10-09 00:36:14.205129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.205350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.817 [2024-10-09 00:36:14.205567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.817 [2024-10-09 00:36:14.205577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.817 [2024-10-09 00:36:14.205585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.817 [2024-10-09 00:36:14.209105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.817 [2024-10-09 00:36:14.218381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.817 [2024-10-09 00:36:14.218945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.817 [2024-10-09 00:36:14.218969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.817 [2024-10-09 00:36:14.218978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.219197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.817 [2024-10-09 00:36:14.219414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.817 [2024-10-09 00:36:14.219440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.817 [2024-10-09 00:36:14.219448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.817 [2024-10-09 00:36:14.222964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.817 [2024-10-09 00:36:14.232241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.817 [2024-10-09 00:36:14.232869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.817 [2024-10-09 00:36:14.232933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.817 [2024-10-09 00:36:14.232946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.233199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.817 [2024-10-09 00:36:14.233422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.817 [2024-10-09 00:36:14.233434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.817 [2024-10-09 00:36:14.233444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.817 [2024-10-09 00:36:14.236981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.817 [2024-10-09 00:36:14.246072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.817 [2024-10-09 00:36:14.246709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.817 [2024-10-09 00:36:14.246746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.817 [2024-10-09 00:36:14.246756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.246978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.817 [2024-10-09 00:36:14.247198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.817 [2024-10-09 00:36:14.247208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.817 [2024-10-09 00:36:14.247216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.817 [2024-10-09 00:36:14.250734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.817 [2024-10-09 00:36:14.260025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.817 [2024-10-09 00:36:14.260602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.817 [2024-10-09 00:36:14.260627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.817 [2024-10-09 00:36:14.260636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.260863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.817 [2024-10-09 00:36:14.261083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.817 [2024-10-09 00:36:14.261092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.817 [2024-10-09 00:36:14.261101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.817 [2024-10-09 00:36:14.264594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.817 [2024-10-09 00:36:14.273878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.817 [2024-10-09 00:36:14.274492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.817 [2024-10-09 00:36:14.274513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.817 [2024-10-09 00:36:14.274521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.274749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.817 [2024-10-09 00:36:14.274969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.817 [2024-10-09 00:36:14.274978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.817 [2024-10-09 00:36:14.274986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.817 [2024-10-09 00:36:14.278491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.817 [2024-10-09 00:36:14.287801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.817 [2024-10-09 00:36:14.288504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.817 [2024-10-09 00:36:14.288567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.817 [2024-10-09 00:36:14.288581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.288847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.817 [2024-10-09 00:36:14.289072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.817 [2024-10-09 00:36:14.289082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.817 [2024-10-09 00:36:14.289090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.817 [2024-10-09 00:36:14.292599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.817 [2024-10-09 00:36:14.301691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.817 [2024-10-09 00:36:14.302392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.817 [2024-10-09 00:36:14.302455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.817 [2024-10-09 00:36:14.302468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.302734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.817 [2024-10-09 00:36:14.302960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.817 [2024-10-09 00:36:14.302969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.817 [2024-10-09 00:36:14.302977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.817 [2024-10-09 00:36:14.306548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.817 [2024-10-09 00:36:14.315648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.817 [2024-10-09 00:36:14.316268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.817 [2024-10-09 00:36:14.316297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.817 [2024-10-09 00:36:14.316306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.817 [2024-10-09 00:36:14.316534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.316761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.316772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.316780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.320284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.818 [2024-10-09 00:36:14.329562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.818 [2024-10-09 00:36:14.330132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.818 [2024-10-09 00:36:14.330155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.818 [2024-10-09 00:36:14.330164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.818 [2024-10-09 00:36:14.330382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.330599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.330611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.330618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.334133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.818 [2024-10-09 00:36:14.343410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.818 [2024-10-09 00:36:14.344060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.818 [2024-10-09 00:36:14.344124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.818 [2024-10-09 00:36:14.344137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.818 [2024-10-09 00:36:14.344390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.344614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.344623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.344632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.348158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.818 [2024-10-09 00:36:14.357252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.818 [2024-10-09 00:36:14.357888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.818 [2024-10-09 00:36:14.357918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.818 [2024-10-09 00:36:14.357927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.818 [2024-10-09 00:36:14.358148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.358379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.358388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.358403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.361922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.818 [2024-10-09 00:36:14.371001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.818 [2024-10-09 00:36:14.371615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.818 [2024-10-09 00:36:14.371639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.818 [2024-10-09 00:36:14.371647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.818 [2024-10-09 00:36:14.371874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.372093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.372101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.372108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.375609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.818 [2024-10-09 00:36:14.384890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.818 [2024-10-09 00:36:14.385448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.818 [2024-10-09 00:36:14.385470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.818 [2024-10-09 00:36:14.385478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.818 [2024-10-09 00:36:14.385696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.385923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.385934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.385941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.389735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.818 [2024-10-09 00:36:14.398629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.818 [2024-10-09 00:36:14.399097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.818 [2024-10-09 00:36:14.399125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.818 [2024-10-09 00:36:14.399133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.818 [2024-10-09 00:36:14.399353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.399570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.399580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.399588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.403108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.818 [2024-10-09 00:36:14.412393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.818 [2024-10-09 00:36:14.413050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.818 [2024-10-09 00:36:14.413120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.818 [2024-10-09 00:36:14.413134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.818 [2024-10-09 00:36:14.413387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.413611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.413621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.413630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.417158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.818 [2024-10-09 00:36:14.426257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.818 [2024-10-09 00:36:14.426868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.818 [2024-10-09 00:36:14.426933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.818 [2024-10-09 00:36:14.426947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.818 [2024-10-09 00:36:14.427201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.427425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.427435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.427444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.430975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.818 [2024-10-09 00:36:14.440064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.818 [2024-10-09 00:36:14.440697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.818 [2024-10-09 00:36:14.440733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:43.818 [2024-10-09 00:36:14.440743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:43.818 [2024-10-09 00:36:14.440963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:43.818 [2024-10-09 00:36:14.441180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.818 [2024-10-09 00:36:14.441190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.818 [2024-10-09 00:36:14.441197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.818 [2024-10-09 00:36:14.444703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.089 [2024-10-09 00:36:14.453999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.089 [2024-10-09 00:36:14.454611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.089 [2024-10-09 00:36:14.454637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.089 [2024-10-09 00:36:14.454646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.089 [2024-10-09 00:36:14.454876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.089 [2024-10-09 00:36:14.455104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.089 [2024-10-09 00:36:14.455113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.089 [2024-10-09 00:36:14.455120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.089 [2024-10-09 00:36:14.458654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.089 [2024-10-09 00:36:14.467754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.089 [2024-10-09 00:36:14.468419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.089 [2024-10-09 00:36:14.468481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.089 [2024-10-09 00:36:14.468494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.089 [2024-10-09 00:36:14.468759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.089 [2024-10-09 00:36:14.468984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.089 [2024-10-09 00:36:14.468994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.089 [2024-10-09 00:36:14.469003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.089 [2024-10-09 00:36:14.472526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.089 [2024-10-09 00:36:14.481615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.089 [2024-10-09 00:36:14.482239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.089 [2024-10-09 00:36:14.482267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.089 [2024-10-09 00:36:14.482276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.089 [2024-10-09 00:36:14.482496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.089 [2024-10-09 00:36:14.482714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.089 [2024-10-09 00:36:14.482733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.089 [2024-10-09 00:36:14.482740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.089 [2024-10-09 00:36:14.486264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.089 [2024-10-09 00:36:14.495388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.089 [2024-10-09 00:36:14.495969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.089 [2024-10-09 00:36:14.495995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.089 [2024-10-09 00:36:14.496004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.089 [2024-10-09 00:36:14.496222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.089 [2024-10-09 00:36:14.496440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.089 [2024-10-09 00:36:14.496451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.089 [2024-10-09 00:36:14.496459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.089 [2024-10-09 00:36:14.499991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.089 [2024-10-09 00:36:14.509304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.089 [2024-10-09 00:36:14.509825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.089 [2024-10-09 00:36:14.509870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.089 [2024-10-09 00:36:14.509880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.089 [2024-10-09 00:36:14.510118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.089 [2024-10-09 00:36:14.510340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.089 [2024-10-09 00:36:14.510348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.089 [2024-10-09 00:36:14.510356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.089 [2024-10-09 00:36:14.513942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.089 [2024-10-09 00:36:14.523057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.089 [2024-10-09 00:36:14.523731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.089 [2024-10-09 00:36:14.523794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.089 [2024-10-09 00:36:14.523807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.089 [2024-10-09 00:36:14.524060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.089 [2024-10-09 00:36:14.524283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.089 [2024-10-09 00:36:14.524292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.089 [2024-10-09 00:36:14.524301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.089 [2024-10-09 00:36:14.527842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.090 [2024-10-09 00:36:14.536954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.537643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.537706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.090 [2024-10-09 00:36:14.537732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.090 [2024-10-09 00:36:14.537986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.090 [2024-10-09 00:36:14.538210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.090 [2024-10-09 00:36:14.538219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.090 [2024-10-09 00:36:14.538227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.090 [2024-10-09 00:36:14.541754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.090 [2024-10-09 00:36:14.550870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.551541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.551603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.090 [2024-10-09 00:36:14.551624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.090 [2024-10-09 00:36:14.551894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.090 [2024-10-09 00:36:14.552118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.090 [2024-10-09 00:36:14.552127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.090 [2024-10-09 00:36:14.552135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.090 [2024-10-09 00:36:14.555661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.090 [2024-10-09 00:36:14.564808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.565535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.565598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.090 [2024-10-09 00:36:14.565611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.090 [2024-10-09 00:36:14.565879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.090 [2024-10-09 00:36:14.566104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.090 [2024-10-09 00:36:14.566113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.090 [2024-10-09 00:36:14.566122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.090 [2024-10-09 00:36:14.569649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.090 [2024-10-09 00:36:14.578778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.579355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.579385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.090 [2024-10-09 00:36:14.579394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.090 [2024-10-09 00:36:14.579615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.090 [2024-10-09 00:36:14.579846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.090 [2024-10-09 00:36:14.579857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.090 [2024-10-09 00:36:14.579865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.090 [2024-10-09 00:36:14.583383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.090 [2024-10-09 00:36:14.592736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.593299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.593324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.090 [2024-10-09 00:36:14.593334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.090 [2024-10-09 00:36:14.593551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.090 [2024-10-09 00:36:14.593781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.090 [2024-10-09 00:36:14.593801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.090 [2024-10-09 00:36:14.593809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.090 [2024-10-09 00:36:14.597322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.090 [2024-10-09 00:36:14.606630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.607296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.607359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.090 [2024-10-09 00:36:14.607373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.090 [2024-10-09 00:36:14.607627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.090 [2024-10-09 00:36:14.607863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.090 [2024-10-09 00:36:14.607876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.090 [2024-10-09 00:36:14.607885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.090 [2024-10-09 00:36:14.611403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.090 [2024-10-09 00:36:14.620481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.621182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.621244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.090 [2024-10-09 00:36:14.621258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.090 [2024-10-09 00:36:14.621512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.090 [2024-10-09 00:36:14.621753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.090 [2024-10-09 00:36:14.621763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.090 [2024-10-09 00:36:14.621772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.090 [2024-10-09 00:36:14.625289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.090 [2024-10-09 00:36:14.634379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.634972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.635002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.090 [2024-10-09 00:36:14.635011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.090 [2024-10-09 00:36:14.635232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.090 [2024-10-09 00:36:14.635449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.090 [2024-10-09 00:36:14.635459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.090 [2024-10-09 00:36:14.635467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.090 [2024-10-09 00:36:14.638984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.090 [2024-10-09 00:36:14.648281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.648884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.648909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.090 [2024-10-09 00:36:14.648918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.090 [2024-10-09 00:36:14.649138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.090 [2024-10-09 00:36:14.649358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.090 [2024-10-09 00:36:14.649368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.090 [2024-10-09 00:36:14.649375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.090 [2024-10-09 00:36:14.652888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.090 [2024-10-09 00:36:14.662185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.090 [2024-10-09 00:36:14.662838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.090 [2024-10-09 00:36:14.662900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.091 [2024-10-09 00:36:14.662913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.091 [2024-10-09 00:36:14.663166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.091 [2024-10-09 00:36:14.663390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.091 [2024-10-09 00:36:14.663399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.091 [2024-10-09 00:36:14.663409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.091 [2024-10-09 00:36:14.666948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.091 [2024-10-09 00:36:14.676016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.091 [2024-10-09 00:36:14.676690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.091 [2024-10-09 00:36:14.676762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.091 [2024-10-09 00:36:14.676777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.091 [2024-10-09 00:36:14.677030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.091 [2024-10-09 00:36:14.677253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.091 [2024-10-09 00:36:14.677262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.091 [2024-10-09 00:36:14.677270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.091 [2024-10-09 00:36:14.680931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.091 [2024-10-09 00:36:14.689829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.091 [2024-10-09 00:36:14.690381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.091 [2024-10-09 00:36:14.690410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.091 [2024-10-09 00:36:14.690419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.091 [2024-10-09 00:36:14.690647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.091 [2024-10-09 00:36:14.690877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.091 [2024-10-09 00:36:14.690887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.091 [2024-10-09 00:36:14.690895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.091 [2024-10-09 00:36:14.694394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.091 [2024-10-09 00:36:14.703651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.091 [2024-10-09 00:36:14.704204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.091 [2024-10-09 00:36:14.704228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.091 [2024-10-09 00:36:14.704237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.091 [2024-10-09 00:36:14.704455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.091 [2024-10-09 00:36:14.704672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.091 [2024-10-09 00:36:14.704681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.091 [2024-10-09 00:36:14.704689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.091 [2024-10-09 00:36:14.708191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.091 [2024-10-09 00:36:14.717474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.091 [2024-10-09 00:36:14.718029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.091 [2024-10-09 00:36:14.718052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.091 [2024-10-09 00:36:14.718060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.091 [2024-10-09 00:36:14.718278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.091 [2024-10-09 00:36:14.718495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.091 [2024-10-09 00:36:14.718505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.091 [2024-10-09 00:36:14.718512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.353 [2024-10-09 00:36:14.722066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.353 [2024-10-09 00:36:14.731362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.353 [2024-10-09 00:36:14.732012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.353 [2024-10-09 00:36:14.732074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.353 [2024-10-09 00:36:14.732087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.353 [2024-10-09 00:36:14.732340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.353 [2024-10-09 00:36:14.732564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.353 [2024-10-09 00:36:14.732574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.353 [2024-10-09 00:36:14.732590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.353 [2024-10-09 00:36:14.736121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.353 [2024-10-09 00:36:14.745194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.353 [2024-10-09 00:36:14.745778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.353 [2024-10-09 00:36:14.745841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.353 [2024-10-09 00:36:14.745856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.353 [2024-10-09 00:36:14.746110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.353 [2024-10-09 00:36:14.746334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.353 [2024-10-09 00:36:14.746343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.353 [2024-10-09 00:36:14.746352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.353 [2024-10-09 00:36:14.749879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.353 [2024-10-09 00:36:14.758955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.353 [2024-10-09 00:36:14.759637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.353 [2024-10-09 00:36:14.759700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.353 [2024-10-09 00:36:14.759713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.353 [2024-10-09 00:36:14.759979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.353 [2024-10-09 00:36:14.760218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.353 [2024-10-09 00:36:14.760228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.353 [2024-10-09 00:36:14.760237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.353 [2024-10-09 00:36:14.763755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.353 [2024-10-09 00:36:14.772830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.353 [2024-10-09 00:36:14.773552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.353 [2024-10-09 00:36:14.773615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.353 [2024-10-09 00:36:14.773628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.353 [2024-10-09 00:36:14.773895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.353 [2024-10-09 00:36:14.774120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.353 [2024-10-09 00:36:14.774129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.353 [2024-10-09 00:36:14.774137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.353 [2024-10-09 00:36:14.777653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.353 [2024-10-09 00:36:14.786737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.353 [2024-10-09 00:36:14.787442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.353 [2024-10-09 00:36:14.787511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.353 [2024-10-09 00:36:14.787524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.353 [2024-10-09 00:36:14.787791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.353 [2024-10-09 00:36:14.788015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.353 [2024-10-09 00:36:14.788024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.353 [2024-10-09 00:36:14.788033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.353 [2024-10-09 00:36:14.791546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.353 [2024-10-09 00:36:14.800612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.353 [2024-10-09 00:36:14.801276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.353 [2024-10-09 00:36:14.801338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.353 [2024-10-09 00:36:14.801351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.353 [2024-10-09 00:36:14.801604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.353 [2024-10-09 00:36:14.801840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.353 [2024-10-09 00:36:14.801850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.353 [2024-10-09 00:36:14.801859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.353 [2024-10-09 00:36:14.805378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.353 [2024-10-09 00:36:14.814452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.353 [2024-10-09 00:36:14.815175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.353 [2024-10-09 00:36:14.815238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.353 [2024-10-09 00:36:14.815252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.353 [2024-10-09 00:36:14.815505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.353 [2024-10-09 00:36:14.815742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.353 [2024-10-09 00:36:14.815752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.353 [2024-10-09 00:36:14.815760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.353 [2024-10-09 00:36:14.819278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.353 [2024-10-09 00:36:14.828348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.353 [2024-10-09 00:36:14.829071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.829133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.829146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.829399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.829631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.354 [2024-10-09 00:36:14.829640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.354 [2024-10-09 00:36:14.829648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.354 [2024-10-09 00:36:14.833178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.354 [2024-10-09 00:36:14.842248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.354 [2024-10-09 00:36:14.842968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.843031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.843044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.843297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.843521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.354 [2024-10-09 00:36:14.843530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.354 [2024-10-09 00:36:14.843538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.354 [2024-10-09 00:36:14.847063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.354 [2024-10-09 00:36:14.856133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.354 [2024-10-09 00:36:14.856777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.856839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.856852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.857105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.857328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.354 [2024-10-09 00:36:14.857337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.354 [2024-10-09 00:36:14.857347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.354 [2024-10-09 00:36:14.860887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.354 [2024-10-09 00:36:14.869964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.354 [2024-10-09 00:36:14.870590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.870618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.870627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.870857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.871076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.354 [2024-10-09 00:36:14.871086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.354 [2024-10-09 00:36:14.871093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.354 [2024-10-09 00:36:14.874614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.354 [2024-10-09 00:36:14.883876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.354 [2024-10-09 00:36:14.884428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.884452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.884461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.884679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.884906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.354 [2024-10-09 00:36:14.884923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.354 [2024-10-09 00:36:14.884931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.354 [2024-10-09 00:36:14.888447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.354 [2024-10-09 00:36:14.897712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.354 [2024-10-09 00:36:14.898315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.898338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.898346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.898564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.898790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.354 [2024-10-09 00:36:14.898801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.354 [2024-10-09 00:36:14.898809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.354 [2024-10-09 00:36:14.902307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.354 [2024-10-09 00:36:14.911584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.354 [2024-10-09 00:36:14.912233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.912297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.912310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.912564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.912802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.354 [2024-10-09 00:36:14.912813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.354 [2024-10-09 00:36:14.912823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.354 [2024-10-09 00:36:14.916345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.354 [2024-10-09 00:36:14.925422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.354 [2024-10-09 00:36:14.926149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.926212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.926238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.926491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.926715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.354 [2024-10-09 00:36:14.926737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.354 [2024-10-09 00:36:14.926746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.354 [2024-10-09 00:36:14.930308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.354 [2024-10-09 00:36:14.939180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.354 [2024-10-09 00:36:14.939861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.939924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.939937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.940190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.940413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.354 [2024-10-09 00:36:14.940423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.354 [2024-10-09 00:36:14.940431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.354 [2024-10-09 00:36:14.944017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.354 [2024-10-09 00:36:14.953126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.354 [2024-10-09 00:36:14.953833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.354 [2024-10-09 00:36:14.953897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.354 [2024-10-09 00:36:14.953911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.354 [2024-10-09 00:36:14.954166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.354 [2024-10-09 00:36:14.954389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.355 [2024-10-09 00:36:14.954400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.355 [2024-10-09 00:36:14.954408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.355 [2024-10-09 00:36:14.957943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.355 [2024-10-09 00:36:14.967034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.355 [2024-10-09 00:36:14.967609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.355 [2024-10-09 00:36:14.967671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.355 [2024-10-09 00:36:14.967684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.355 [2024-10-09 00:36:14.967954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.355 [2024-10-09 00:36:14.968178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.355 [2024-10-09 00:36:14.968194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.355 [2024-10-09 00:36:14.968202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.355 [2024-10-09 00:36:14.971712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.355 [2024-10-09 00:36:14.980794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.355 [2024-10-09 00:36:14.981467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.355 [2024-10-09 00:36:14.981529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.355 [2024-10-09 00:36:14.981542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.355 [2024-10-09 00:36:14.981810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.355 [2024-10-09 00:36:14.982035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.355 [2024-10-09 00:36:14.982044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.355 [2024-10-09 00:36:14.982053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.355 [2024-10-09 00:36:14.985579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.617 [2024-10-09 00:36:14.994692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:14.995324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:14.995353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:14.995362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:14.995582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:14.995809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:14.995820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:14.995829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:14.999330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.617 [2024-10-09 00:36:15.008588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:15.009155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:15.009180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:15.009188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:15.009407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:15.009623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:15.009634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:15.009641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:15.013146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.617 [2024-10-09 00:36:15.021233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:15.021829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:15.021885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:15.021896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:15.022080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:15.022236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:15.022243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:15.022249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:15.024684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.617 [2024-10-09 00:36:15.033872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:15.034387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:15.034410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:15.034417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:15.034569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:15.034728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:15.034735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:15.034741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:15.037149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.617 [2024-10-09 00:36:15.046454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:15.046950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:15.046995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:15.047005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:15.047180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:15.047334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:15.047341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:15.047348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:15.049761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.617 [2024-10-09 00:36:15.059078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:15.059672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:15.059714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:15.059731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:15.059910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:15.060063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:15.060070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:15.060076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:15.062498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.617 [2024-10-09 00:36:15.071682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:15.072337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:15.072377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:15.072386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:15.072558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:15.072710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:15.072717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:15.072732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:15.075139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.617 [2024-10-09 00:36:15.084311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:15.084903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:15.084942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:15.084950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:15.085120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:15.085273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:15.085280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:15.085285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:15.087694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.617 [2024-10-09 00:36:15.097012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:15.097598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:15.097634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:15.097643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:15.097820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:15.097974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:15.097980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:15.097990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:15.100391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.617 [2024-10-09 00:36:15.109702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.617 [2024-10-09 00:36:15.110263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.617 [2024-10-09 00:36:15.110298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.617 [2024-10-09 00:36:15.110306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.617 [2024-10-09 00:36:15.110474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.617 [2024-10-09 00:36:15.110625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.617 [2024-10-09 00:36:15.110632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.617 [2024-10-09 00:36:15.110638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.617 [2024-10-09 00:36:15.113047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.617 [2024-10-09 00:36:15.122356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.122933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.122967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.122975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.123142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.123293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.123299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.123305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 [2024-10-09 00:36:15.125712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.618 [2024-10-09 00:36:15.135020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.135513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.135528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.135534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.135683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.135856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.135864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.135869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 [2024-10-09 00:36:15.138262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.618 [2024-10-09 00:36:15.147699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.148223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.148258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.148266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.148432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.148583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.148589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.148595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 [2024-10-09 00:36:15.150998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.618 [2024-10-09 00:36:15.160306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.160783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.160799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.160805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.160954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.161103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.161109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.161114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 [2024-10-09 00:36:15.163515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.618 [2024-10-09 00:36:15.172977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.173552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.173582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.173592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.173763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.173915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.173922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.173928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 [2024-10-09 00:36:15.176325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.618 [2024-10-09 00:36:15.185630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.186085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.186114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.186123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.186286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.186441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.186447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.186452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 [2024-10-09 00:36:15.188864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.618 [2024-10-09 00:36:15.198307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.198917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.198947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.198956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.199121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.199272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.199278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.199284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 6527.50 IOPS, 25.50 MiB/s [2024-10-08T22:36:15.253Z] [2024-10-09 00:36:15.202818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.618 [2024-10-09 00:36:15.210997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.211568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.211599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.211607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.211778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.211931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.211937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.211943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 [2024-10-09 00:36:15.214335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
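The figure interleaved above, "6527.50 IOPS, 25.50 MiB/s", appears to be a periodic throughput report from the test's I/O workload, which keeps running while the reconnect attempts fail. Assuming a 4 KiB I/O size (an assumption, not stated in this excerpt), the two numbers are consistent: 6527.50 * 4096 B is roughly 25.50 MiB/s. A minimal sketch of that check:

/* Hedged back-of-envelope check of the "6527.50 IOPS, 25.50 MiB/s" report.
 * The 4 KiB block size below is assumed, not taken from the log. */
#include <stdio.h>

int main(void)
{
    double iops = 6527.50;
    double io_size_bytes = 4096.0;                         /* assumed 4 KiB per I/O */
    double mib_per_s = iops * io_size_bytes / (1024.0 * 1024.0);
    printf("%.2f MiB/s\n", mib_per_s);                     /* prints 25.50 */
    return 0;
}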
00:28:44.618 [2024-10-09 00:36:15.223635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.224192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.224222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.224231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.224395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.224546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.224553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.224559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 [2024-10-09 00:36:15.226964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.618 [2024-10-09 00:36:15.236270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.236776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.236798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.236805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.236960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.237109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.237115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.237120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.618 [2024-10-09 00:36:15.239516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.618 [2024-10-09 00:36:15.248965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.618 [2024-10-09 00:36:15.249528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.618 [2024-10-09 00:36:15.249558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.618 [2024-10-09 00:36:15.249567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.618 [2024-10-09 00:36:15.249741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.618 [2024-10-09 00:36:15.249894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.618 [2024-10-09 00:36:15.249900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.618 [2024-10-09 00:36:15.249905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.895 [2024-10-09 00:36:15.252302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.895 [2024-10-09 00:36:15.261617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.895 [2024-10-09 00:36:15.262204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.895 [2024-10-09 00:36:15.262234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.895 [2024-10-09 00:36:15.262243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.895 [2024-10-09 00:36:15.262408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.895 [2024-10-09 00:36:15.262560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.895 [2024-10-09 00:36:15.262566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.895 [2024-10-09 00:36:15.262572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.895 [2024-10-09 00:36:15.264979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.895 [2024-10-09 00:36:15.274290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.895 [2024-10-09 00:36:15.274827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.895 [2024-10-09 00:36:15.274858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.895 [2024-10-09 00:36:15.274870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.895 [2024-10-09 00:36:15.275038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.895 [2024-10-09 00:36:15.275189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.895 [2024-10-09 00:36:15.275196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.895 [2024-10-09 00:36:15.275202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.895 [2024-10-09 00:36:15.277605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.895 [2024-10-09 00:36:15.286909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.895 [2024-10-09 00:36:15.287393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.895 [2024-10-09 00:36:15.287407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.287413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.287562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.287709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.287715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.287734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.290129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.896 [2024-10-09 00:36:15.299567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.300040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.300052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.300058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.300206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.300353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.300359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.300364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.302757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.896 [2024-10-09 00:36:15.312202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.312788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.312819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.312827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.312994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.313145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.313155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.313161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.315563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.896 [2024-10-09 00:36:15.324872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.325432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.325463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.325472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.325636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.325795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.325802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.325808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.328205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.896 [2024-10-09 00:36:15.337512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.338085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.338116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.338125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.338289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.338440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.338447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.338452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.340856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.896 [2024-10-09 00:36:15.350181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.350746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.350776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.350785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.350952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.351103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.351109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.351115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.353520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.896 [2024-10-09 00:36:15.362838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.363389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.363419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.363428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.363592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.363751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.363758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.363763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.366160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.896 [2024-10-09 00:36:15.375463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.375938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.375969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.375977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.376142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.376293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.376300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.376305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.378705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.896 [2024-10-09 00:36:15.388307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.388914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.388944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.388953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.389117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.389269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.389275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.389280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.391677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.896 [2024-10-09 00:36:15.400895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.401461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.401491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.401500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.401668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.401826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.401833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.401839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.404233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.896 [2024-10-09 00:36:15.413537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.414141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.414171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.414180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.414344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.414495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.414502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.414508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.416913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.896 [2024-10-09 00:36:15.426228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.426794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.426825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.426834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.427001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.427153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.427159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.427165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.429570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.896 [2024-10-09 00:36:15.438878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.439448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.439479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.439487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.439652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.439810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.439817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.439827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.442223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.896 [2024-10-09 00:36:15.451525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.452089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.452120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.452129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.452293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.452445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.452451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.452457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.454863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.896 [2024-10-09 00:36:15.464175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.464663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-10-09 00:36:15.464677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-10-09 00:36:15.464683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.896 [2024-10-09 00:36:15.464838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.896 [2024-10-09 00:36:15.464987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.896 [2024-10-09 00:36:15.464993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.896 [2024-10-09 00:36:15.464998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.896 [2024-10-09 00:36:15.467388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.896 [2024-10-09 00:36:15.476838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.896 [2024-10-09 00:36:15.477317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-10-09 00:36:15.477330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-10-09 00:36:15.477335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.897 [2024-10-09 00:36:15.477483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.897 [2024-10-09 00:36:15.477631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.897 [2024-10-09 00:36:15.477637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.897 [2024-10-09 00:36:15.477641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.897 [2024-10-09 00:36:15.480038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.897 [2024-10-09 00:36:15.489501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.897 [2024-10-09 00:36:15.490054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-10-09 00:36:15.490084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-10-09 00:36:15.490093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.897 [2024-10-09 00:36:15.490260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.897 [2024-10-09 00:36:15.490411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.897 [2024-10-09 00:36:15.490418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.897 [2024-10-09 00:36:15.490423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.897 [2024-10-09 00:36:15.492832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.897 [2024-10-09 00:36:15.502157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.897 [2024-10-09 00:36:15.502507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-10-09 00:36:15.502523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-10-09 00:36:15.502529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.897 [2024-10-09 00:36:15.502679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.897 [2024-10-09 00:36:15.502832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.897 [2024-10-09 00:36:15.502838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.897 [2024-10-09 00:36:15.502843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.897 [2024-10-09 00:36:15.505239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.897 [2024-10-09 00:36:15.514835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.897 [2024-10-09 00:36:15.515318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-10-09 00:36:15.515331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-10-09 00:36:15.515336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:44.897 [2024-10-09 00:36:15.515484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:44.897 [2024-10-09 00:36:15.515631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.897 [2024-10-09 00:36:15.515637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.897 [2024-10-09 00:36:15.515642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.897 [2024-10-09 00:36:15.518037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.162 [2024-10-09 00:36:15.527486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.162 [2024-10-09 00:36:15.528018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-10-09 00:36:15.528049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-10-09 00:36:15.528058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.162 [2024-10-09 00:36:15.528222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.162 [2024-10-09 00:36:15.528377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.162 [2024-10-09 00:36:15.528383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.162 [2024-10-09 00:36:15.528389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.162 [2024-10-09 00:36:15.530794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.162 [2024-10-09 00:36:15.540106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.162 [2024-10-09 00:36:15.540669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-10-09 00:36:15.540699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-10-09 00:36:15.540709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.162 [2024-10-09 00:36:15.540882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.162 [2024-10-09 00:36:15.541034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.162 [2024-10-09 00:36:15.541041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.162 [2024-10-09 00:36:15.541046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.162 [2024-10-09 00:36:15.543445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.162 [2024-10-09 00:36:15.552785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.162 [2024-10-09 00:36:15.553362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-10-09 00:36:15.553392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-10-09 00:36:15.553401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.162 [2024-10-09 00:36:15.553566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.162 [2024-10-09 00:36:15.553718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.162 [2024-10-09 00:36:15.553732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.162 [2024-10-09 00:36:15.553738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.556135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.163 [2024-10-09 00:36:15.565461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.566021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.566051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.566060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.163 [2024-10-09 00:36:15.566224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.163 [2024-10-09 00:36:15.566376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.163 [2024-10-09 00:36:15.566382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.163 [2024-10-09 00:36:15.566387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.568802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.163 [2024-10-09 00:36:15.578111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.578636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.578667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.578676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.163 [2024-10-09 00:36:15.578850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.163 [2024-10-09 00:36:15.579002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.163 [2024-10-09 00:36:15.579008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.163 [2024-10-09 00:36:15.579014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.581411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.163 [2024-10-09 00:36:15.590732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.591262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.591292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.591300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.163 [2024-10-09 00:36:15.591465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.163 [2024-10-09 00:36:15.591616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.163 [2024-10-09 00:36:15.591622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.163 [2024-10-09 00:36:15.591628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.594029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.163 [2024-10-09 00:36:15.603335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.603835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.603866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.603875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.163 [2024-10-09 00:36:15.604042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.163 [2024-10-09 00:36:15.604193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.163 [2024-10-09 00:36:15.604199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.163 [2024-10-09 00:36:15.604204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.606607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.163 [2024-10-09 00:36:15.615918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.616399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.616413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.616422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.163 [2024-10-09 00:36:15.616572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.163 [2024-10-09 00:36:15.616724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.163 [2024-10-09 00:36:15.616730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.163 [2024-10-09 00:36:15.616735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.619128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.163 [2024-10-09 00:36:15.628571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.629116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.629147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.629155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.163 [2024-10-09 00:36:15.629320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.163 [2024-10-09 00:36:15.629472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.163 [2024-10-09 00:36:15.629478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.163 [2024-10-09 00:36:15.629484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.631887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.163 [2024-10-09 00:36:15.641196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.641666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.641680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.641686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.163 [2024-10-09 00:36:15.641840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.163 [2024-10-09 00:36:15.641989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.163 [2024-10-09 00:36:15.641995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.163 [2024-10-09 00:36:15.642000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.644392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.163 [2024-10-09 00:36:15.653837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.654268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.654280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.654286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.163 [2024-10-09 00:36:15.654434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.163 [2024-10-09 00:36:15.654582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.163 [2024-10-09 00:36:15.654592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.163 [2024-10-09 00:36:15.654597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.656994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.163 [2024-10-09 00:36:15.666445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.666869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.666883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.666888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.163 [2024-10-09 00:36:15.667037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.163 [2024-10-09 00:36:15.667185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.163 [2024-10-09 00:36:15.667192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.163 [2024-10-09 00:36:15.667197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.163 [2024-10-09 00:36:15.669587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.163 [2024-10-09 00:36:15.679041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.163 [2024-10-09 00:36:15.679572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-10-09 00:36:15.679603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-10-09 00:36:15.679612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.164 [2024-10-09 00:36:15.679784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.164 [2024-10-09 00:36:15.679936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.164 [2024-10-09 00:36:15.679942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.164 [2024-10-09 00:36:15.679948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.164 [2024-10-09 00:36:15.682345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.164 [2024-10-09 00:36:15.691666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.164 [2024-10-09 00:36:15.692260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-10-09 00:36:15.692291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-10-09 00:36:15.692300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.164 [2024-10-09 00:36:15.692464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.164 [2024-10-09 00:36:15.692615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.164 [2024-10-09 00:36:15.692621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.164 [2024-10-09 00:36:15.692627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.164 [2024-10-09 00:36:15.695028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.164 [2024-10-09 00:36:15.704344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.164 [2024-10-09 00:36:15.704864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-10-09 00:36:15.704895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-10-09 00:36:15.704904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.164 [2024-10-09 00:36:15.705138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.164 [2024-10-09 00:36:15.705290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.164 [2024-10-09 00:36:15.705296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.164 [2024-10-09 00:36:15.705302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.164 [2024-10-09 00:36:15.707708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.164 [2024-10-09 00:36:15.716958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.164 [2024-10-09 00:36:15.717509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-10-09 00:36:15.717539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-10-09 00:36:15.717548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.164 [2024-10-09 00:36:15.717712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.164 [2024-10-09 00:36:15.717870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.164 [2024-10-09 00:36:15.717877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.164 [2024-10-09 00:36:15.717882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.164 [2024-10-09 00:36:15.720282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.164 [2024-10-09 00:36:15.729589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.164 [2024-10-09 00:36:15.730174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-10-09 00:36:15.730204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-10-09 00:36:15.730214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.164 [2024-10-09 00:36:15.730380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.164 [2024-10-09 00:36:15.730531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.164 [2024-10-09 00:36:15.730538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.164 [2024-10-09 00:36:15.730544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.164 [2024-10-09 00:36:15.732945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.164 [2024-10-09 00:36:15.742256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.164 [2024-10-09 00:36:15.742705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-10-09 00:36:15.742724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-10-09 00:36:15.742730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.164 [2024-10-09 00:36:15.742883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.164 [2024-10-09 00:36:15.743032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.164 [2024-10-09 00:36:15.743039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.164 [2024-10-09 00:36:15.743044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.164 [2024-10-09 00:36:15.745436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.164 [2024-10-09 00:36:15.754889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.164 [2024-10-09 00:36:15.755474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-10-09 00:36:15.755505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-10-09 00:36:15.755514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.164 [2024-10-09 00:36:15.755678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.164 [2024-10-09 00:36:15.755835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.164 [2024-10-09 00:36:15.755843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.164 [2024-10-09 00:36:15.755848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.164 [2024-10-09 00:36:15.758246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.164 [2024-10-09 00:36:15.767457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.164 [2024-10-09 00:36:15.767949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-10-09 00:36:15.767964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-10-09 00:36:15.767970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.164 [2024-10-09 00:36:15.768119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.164 [2024-10-09 00:36:15.768267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.164 [2024-10-09 00:36:15.768272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.164 [2024-10-09 00:36:15.768277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.164 [2024-10-09 00:36:15.770669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.164 [2024-10-09 00:36:15.780118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.164 [2024-10-09 00:36:15.780562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-10-09 00:36:15.780574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-10-09 00:36:15.780579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.164 [2024-10-09 00:36:15.780732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.164 [2024-10-09 00:36:15.780880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.164 [2024-10-09 00:36:15.780885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.164 [2024-10-09 00:36:15.780894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.164 [2024-10-09 00:36:15.783284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.164 [2024-10-09 00:36:15.792740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.164 [2024-10-09 00:36:15.792970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-10-09 00:36:15.792983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.165 [2024-10-09 00:36:15.792989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.165 [2024-10-09 00:36:15.793137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.165 [2024-10-09 00:36:15.793286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.165 [2024-10-09 00:36:15.793291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.165 [2024-10-09 00:36:15.793296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.427 [2024-10-09 00:36:15.795690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.427 [2024-10-09 00:36:15.805418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.427 [2024-10-09 00:36:15.806040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-10-09 00:36:15.806070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-10-09 00:36:15.806079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.427 [2024-10-09 00:36:15.806244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.427 [2024-10-09 00:36:15.806394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.427 [2024-10-09 00:36:15.806401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.427 [2024-10-09 00:36:15.806407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.427 [2024-10-09 00:36:15.808810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.427 [2024-10-09 00:36:15.818120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.427 [2024-10-09 00:36:15.818597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-10-09 00:36:15.818627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-10-09 00:36:15.818637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.427 [2024-10-09 00:36:15.818806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.427 [2024-10-09 00:36:15.818958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.427 [2024-10-09 00:36:15.818965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.427 [2024-10-09 00:36:15.818970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.427 [2024-10-09 00:36:15.821366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.427 [2024-10-09 00:36:15.830818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.427 [2024-10-09 00:36:15.831310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-10-09 00:36:15.831324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-10-09 00:36:15.831330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.427 [2024-10-09 00:36:15.831479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.427 [2024-10-09 00:36:15.831627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.427 [2024-10-09 00:36:15.831632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.427 [2024-10-09 00:36:15.831638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.427 [2024-10-09 00:36:15.834034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.427 [2024-10-09 00:36:15.843473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.427 [2024-10-09 00:36:15.843928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-10-09 00:36:15.843941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-10-09 00:36:15.843947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.427 [2024-10-09 00:36:15.844095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.427 [2024-10-09 00:36:15.844243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.427 [2024-10-09 00:36:15.844249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.427 [2024-10-09 00:36:15.844253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.427 [2024-10-09 00:36:15.846644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.427 [2024-10-09 00:36:15.856098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.427 [2024-10-09 00:36:15.856587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-10-09 00:36:15.856599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-10-09 00:36:15.856604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.427 [2024-10-09 00:36:15.856757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.427 [2024-10-09 00:36:15.856906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.427 [2024-10-09 00:36:15.856911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.427 [2024-10-09 00:36:15.856916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.427 [2024-10-09 00:36:15.859309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.428 [2024-10-09 00:36:15.868764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.869327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.869357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.869366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.869534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.869685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.428 [2024-10-09 00:36:15.869691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.428 [2024-10-09 00:36:15.869697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.428 [2024-10-09 00:36:15.872099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.428 [2024-10-09 00:36:15.881410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.881987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.882002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.882008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.882156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.882304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.428 [2024-10-09 00:36:15.882309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.428 [2024-10-09 00:36:15.882314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.428 [2024-10-09 00:36:15.884707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.428 [2024-10-09 00:36:15.894026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.894561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.894591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.894600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.894770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.894922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.428 [2024-10-09 00:36:15.894928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.428 [2024-10-09 00:36:15.894934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.428 [2024-10-09 00:36:15.897331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.428 [2024-10-09 00:36:15.906633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.907081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.907096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.907101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.907250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.907399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.428 [2024-10-09 00:36:15.907404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.428 [2024-10-09 00:36:15.907409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.428 [2024-10-09 00:36:15.909808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.428 [2024-10-09 00:36:15.919254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.919686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.919698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.919704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.919856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.920005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.428 [2024-10-09 00:36:15.920011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.428 [2024-10-09 00:36:15.920016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.428 [2024-10-09 00:36:15.922403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.428 [2024-10-09 00:36:15.931851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.932092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.932106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.932112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.932261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.932408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.428 [2024-10-09 00:36:15.932414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.428 [2024-10-09 00:36:15.932419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.428 [2024-10-09 00:36:15.934818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.428 [2024-10-09 00:36:15.944548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.944997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.945010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.945016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.945165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.945312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.428 [2024-10-09 00:36:15.945318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.428 [2024-10-09 00:36:15.945323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.428 [2024-10-09 00:36:15.947711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.428 [2024-10-09 00:36:15.957156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.957602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.957617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.957623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.957775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.957924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.428 [2024-10-09 00:36:15.957929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.428 [2024-10-09 00:36:15.957934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.428 [2024-10-09 00:36:15.960324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.428 [2024-10-09 00:36:15.969804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.970370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.970400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.970409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.970573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.970731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.428 [2024-10-09 00:36:15.970738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.428 [2024-10-09 00:36:15.970743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.428 [2024-10-09 00:36:15.973141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.428 [2024-10-09 00:36:15.982447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.428 [2024-10-09 00:36:15.983131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-10-09 00:36:15.983162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-10-09 00:36:15.983170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.428 [2024-10-09 00:36:15.983338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.428 [2024-10-09 00:36:15.983489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.429 [2024-10-09 00:36:15.983495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.429 [2024-10-09 00:36:15.983501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.429 [2024-10-09 00:36:15.985903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.429 [2024-10-09 00:36:15.995079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.429 [2024-10-09 00:36:15.995646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.429 [2024-10-09 00:36:15.995676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.429 [2024-10-09 00:36:15.995685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.429 [2024-10-09 00:36:15.995858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.429 [2024-10-09 00:36:15.996013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.429 [2024-10-09 00:36:15.996020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.429 [2024-10-09 00:36:15.996026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.429 [2024-10-09 00:36:15.998422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.429 [2024-10-09 00:36:16.007738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.429 [2024-10-09 00:36:16.008248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.429 [2024-10-09 00:36:16.008279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.429 [2024-10-09 00:36:16.008288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.429 [2024-10-09 00:36:16.008452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.429 [2024-10-09 00:36:16.008603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.429 [2024-10-09 00:36:16.008610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.429 [2024-10-09 00:36:16.008615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.429 [2024-10-09 00:36:16.011015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.429 [2024-10-09 00:36:16.020324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.429 [2024-10-09 00:36:16.020871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.429 [2024-10-09 00:36:16.020901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.429 [2024-10-09 00:36:16.020910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.429 [2024-10-09 00:36:16.021077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.429 [2024-10-09 00:36:16.021228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.429 [2024-10-09 00:36:16.021234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.429 [2024-10-09 00:36:16.021240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.429 [2024-10-09 00:36:16.023645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.429 [2024-10-09 00:36:16.032958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.429 [2024-10-09 00:36:16.033432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.429 [2024-10-09 00:36:16.033446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.429 [2024-10-09 00:36:16.033452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.429 [2024-10-09 00:36:16.033601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.429 [2024-10-09 00:36:16.033754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.429 [2024-10-09 00:36:16.033760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.429 [2024-10-09 00:36:16.033765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.429 [2024-10-09 00:36:16.036156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.429 [2024-10-09 00:36:16.045604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.429 [2024-10-09 00:36:16.046182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.429 [2024-10-09 00:36:16.046213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.429 [2024-10-09 00:36:16.046222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.429 [2024-10-09 00:36:16.046386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.429 [2024-10-09 00:36:16.046537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.429 [2024-10-09 00:36:16.046544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.429 [2024-10-09 00:36:16.046549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.429 [2024-10-09 00:36:16.048953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.429 [2024-10-09 00:36:16.058262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.429 [2024-10-09 00:36:16.058919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.429 [2024-10-09 00:36:16.058950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.429 [2024-10-09 00:36:16.058959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.429 [2024-10-09 00:36:16.059123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.429 [2024-10-09 00:36:16.059275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.429 [2024-10-09 00:36:16.059281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.429 [2024-10-09 00:36:16.059287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.691 [2024-10-09 00:36:16.061688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.691 [2024-10-09 00:36:16.070872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.691 [2024-10-09 00:36:16.071413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.691 [2024-10-09 00:36:16.071443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.691 [2024-10-09 00:36:16.071452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.691 [2024-10-09 00:36:16.071617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.691 [2024-10-09 00:36:16.071774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.071781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.071787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.074183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.692 [2024-10-09 00:36:16.083489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.083973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.083988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-10-09 00:36:16.083998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.692 [2024-10-09 00:36:16.084147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.692 [2024-10-09 00:36:16.084295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.084301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.084306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.086701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.692 [2024-10-09 00:36:16.096158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.096627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.096640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-10-09 00:36:16.096645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.692 [2024-10-09 00:36:16.096871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.692 [2024-10-09 00:36:16.097021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.097027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.097032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.099426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.692 [2024-10-09 00:36:16.108732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.109272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.109303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-10-09 00:36:16.109311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.692 [2024-10-09 00:36:16.109476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.692 [2024-10-09 00:36:16.109627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.109633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.109639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.112044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.692 [2024-10-09 00:36:16.121355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.121856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.121887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-10-09 00:36:16.121897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.692 [2024-10-09 00:36:16.122064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.692 [2024-10-09 00:36:16.122215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.122221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.122231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.124631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.692 [2024-10-09 00:36:16.133950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.134435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.134449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-10-09 00:36:16.134455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.692 [2024-10-09 00:36:16.134603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.692 [2024-10-09 00:36:16.134757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.134763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.134768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.137161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.692 [2024-10-09 00:36:16.146609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.147073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.147086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-10-09 00:36:16.147091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.692 [2024-10-09 00:36:16.147240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.692 [2024-10-09 00:36:16.147388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.147393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.147398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.149791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.692 [2024-10-09 00:36:16.159240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.159728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.159740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-10-09 00:36:16.159745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.692 [2024-10-09 00:36:16.159893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.692 [2024-10-09 00:36:16.160041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.160047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.160052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.162479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.692 [2024-10-09 00:36:16.171939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.172370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.172383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-10-09 00:36:16.172388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.692 [2024-10-09 00:36:16.172536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.692 [2024-10-09 00:36:16.172684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.172690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.172695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.175089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.692 [2024-10-09 00:36:16.184558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.185029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.185042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-10-09 00:36:16.185048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.692 [2024-10-09 00:36:16.185196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.692 [2024-10-09 00:36:16.185344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.692 [2024-10-09 00:36:16.185350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.692 [2024-10-09 00:36:16.185355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.692 [2024-10-09 00:36:16.187757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.692 [2024-10-09 00:36:16.197203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.692 [2024-10-09 00:36:16.197767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-10-09 00:36:16.197797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.197806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.197973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.198124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.198130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.198136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.200540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.693 5222.00 IOPS, 20.40 MiB/s [2024-10-08T22:36:16.328Z] [2024-10-09 00:36:16.209861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.210428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.210458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.210467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.210639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.210795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.210802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.210807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.213203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.693 [2024-10-09 00:36:16.222513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.223010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.223025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.223031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.223180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.223328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.223334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.223339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.225739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.693 [2024-10-09 00:36:16.235188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.235713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.235750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.235759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.235923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.236075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.236081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.236086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.238483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.693 [2024-10-09 00:36:16.247791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.248126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.248142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.248148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.248297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.248446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.248451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.248460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.250863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.693 [2024-10-09 00:36:16.260447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.261025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.261056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.261064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.261229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.261380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.261386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.261392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.263797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.693 [2024-10-09 00:36:16.273122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.273682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.273713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.273727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.273894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.274045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.274052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.274057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.276453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.693 [2024-10-09 00:36:16.285767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.286293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.286307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.286313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.286462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.286610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.286615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.286620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.289027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.693 [2024-10-09 00:36:16.298337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.298829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.298864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.298873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.299040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.299192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.299199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.299204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.301610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.693 [2024-10-09 00:36:16.310928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.311493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.311523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.311532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.311699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.311856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.311863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.311869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.693 [2024-10-09 00:36:16.314270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.693 [2024-10-09 00:36:16.323580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.693 [2024-10-09 00:36:16.324054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-10-09 00:36:16.324069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-10-09 00:36:16.324074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.693 [2024-10-09 00:36:16.324223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.693 [2024-10-09 00:36:16.324371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.693 [2024-10-09 00:36:16.324376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.693 [2024-10-09 00:36:16.324381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.955 [2024-10-09 00:36:16.326778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.955 [2024-10-09 00:36:16.336231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.955 [2024-10-09 00:36:16.336712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.955 [2024-10-09 00:36:16.336730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.955 [2024-10-09 00:36:16.336736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.955 [2024-10-09 00:36:16.336884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.955 [2024-10-09 00:36:16.337037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.955 [2024-10-09 00:36:16.337043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.955 [2024-10-09 00:36:16.337048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.955 [2024-10-09 00:36:16.339436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.955 [2024-10-09 00:36:16.348882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.955 [2024-10-09 00:36:16.349228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.955 [2024-10-09 00:36:16.349242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.955 [2024-10-09 00:36:16.349247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.955 [2024-10-09 00:36:16.349396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.955 [2024-10-09 00:36:16.349544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.955 [2024-10-09 00:36:16.349549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.955 [2024-10-09 00:36:16.349554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.955 [2024-10-09 00:36:16.351952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.956 [2024-10-09 00:36:16.361568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.362115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.362146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.362154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.362319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.362471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.362478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.362484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.364886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.956 [2024-10-09 00:36:16.374203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.374685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.374700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.374705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.374859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.375007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.375013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.375018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.377412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.956 [2024-10-09 00:36:16.386888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.387456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.387486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.387494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.387659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.387934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.387944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.387950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.390383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.956 [2024-10-09 00:36:16.399550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.400107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.400138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.400147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.400311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.400462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.400469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.400474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.402877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.956 [2024-10-09 00:36:16.412179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.412762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.412793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.412802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.412967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.413118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.413124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.413129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.415532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.956 [2024-10-09 00:36:16.424836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.425415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.425445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.425458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.425623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.425782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.425790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.425796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.428195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.956 [2024-10-09 00:36:16.437508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.437970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.437986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.437992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.438141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.438289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.438294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.438299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.440688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.956 [2024-10-09 00:36:16.450131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.450610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.450622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.450627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.450781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.450929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.450935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.450940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.453329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.956 [2024-10-09 00:36:16.462770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.463329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.463359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.463368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.463532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.463683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.463693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.463698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.466112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.956 [2024-10-09 00:36:16.475419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.476011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-10-09 00:36:16.476042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-10-09 00:36:16.476051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.956 [2024-10-09 00:36:16.476215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.956 [2024-10-09 00:36:16.476367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.956 [2024-10-09 00:36:16.476373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.956 [2024-10-09 00:36:16.476378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.956 [2024-10-09 00:36:16.478779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.956 [2024-10-09 00:36:16.488091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.956 [2024-10-09 00:36:16.488652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-10-09 00:36:16.488683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-10-09 00:36:16.488692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.957 [2024-10-09 00:36:16.488862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.957 [2024-10-09 00:36:16.489014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.957 [2024-10-09 00:36:16.489020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.957 [2024-10-09 00:36:16.489026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.957 [2024-10-09 00:36:16.491421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.957 [2024-10-09 00:36:16.500722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.957 [2024-10-09 00:36:16.501237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-10-09 00:36:16.501266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-10-09 00:36:16.501275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.957 [2024-10-09 00:36:16.501439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.957 [2024-10-09 00:36:16.501590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.957 [2024-10-09 00:36:16.501597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.957 [2024-10-09 00:36:16.501602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.957 [2024-10-09 00:36:16.504006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.957 [2024-10-09 00:36:16.513312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.957 [2024-10-09 00:36:16.513858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-10-09 00:36:16.513888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-10-09 00:36:16.513897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.957 [2024-10-09 00:36:16.514064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.957 [2024-10-09 00:36:16.514215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.957 [2024-10-09 00:36:16.514221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.957 [2024-10-09 00:36:16.514227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.957 [2024-10-09 00:36:16.516631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.957 [2024-10-09 00:36:16.525943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.957 [2024-10-09 00:36:16.526515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-10-09 00:36:16.526546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-10-09 00:36:16.526554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.957 [2024-10-09 00:36:16.526719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.957 [2024-10-09 00:36:16.526878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.957 [2024-10-09 00:36:16.526884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.957 [2024-10-09 00:36:16.526889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.957 [2024-10-09 00:36:16.529284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.957 [2024-10-09 00:36:16.538586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.957 [2024-10-09 00:36:16.539047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-10-09 00:36:16.539062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-10-09 00:36:16.539068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.957 [2024-10-09 00:36:16.539217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.957 [2024-10-09 00:36:16.539365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.957 [2024-10-09 00:36:16.539371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.957 [2024-10-09 00:36:16.539376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.957 [2024-10-09 00:36:16.541768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.957 [2024-10-09 00:36:16.551207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.957 [2024-10-09 00:36:16.551550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-10-09 00:36:16.551564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-10-09 00:36:16.551569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.957 [2024-10-09 00:36:16.551727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.957 [2024-10-09 00:36:16.551877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.957 [2024-10-09 00:36:16.551882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.957 [2024-10-09 00:36:16.551887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.957 [2024-10-09 00:36:16.554277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.957 [2024-10-09 00:36:16.563859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.957 [2024-10-09 00:36:16.564342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-10-09 00:36:16.564354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-10-09 00:36:16.564359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.957 [2024-10-09 00:36:16.564507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.957 [2024-10-09 00:36:16.564655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.957 [2024-10-09 00:36:16.564660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.957 [2024-10-09 00:36:16.564665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.957 [2024-10-09 00:36:16.567069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.957 [2024-10-09 00:36:16.576506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.957 [2024-10-09 00:36:16.577096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-10-09 00:36:16.577126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-10-09 00:36:16.577135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:45.957 [2024-10-09 00:36:16.577301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:45.957 [2024-10-09 00:36:16.577452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.957 [2024-10-09 00:36:16.577459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.957 [2024-10-09 00:36:16.577464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.957 [2024-10-09 00:36:16.579865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.219 [2024-10-09 00:36:16.589181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.219 [2024-10-09 00:36:16.589654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.219 [2024-10-09 00:36:16.589669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.219 [2024-10-09 00:36:16.589674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.219 [2024-10-09 00:36:16.589829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.219 [2024-10-09 00:36:16.589977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.219 [2024-10-09 00:36:16.589983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.219 [2024-10-09 00:36:16.589992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.219 [2024-10-09 00:36:16.592388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.219 [2024-10-09 00:36:16.601854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.219 [2024-10-09 00:36:16.602425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.219 [2024-10-09 00:36:16.602455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.219 [2024-10-09 00:36:16.602464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.219 [2024-10-09 00:36:16.602628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.219 [2024-10-09 00:36:16.602786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.219 [2024-10-09 00:36:16.602793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.219 [2024-10-09 00:36:16.602799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.219 [2024-10-09 00:36:16.605196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.219 [2024-10-09 00:36:16.614503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.219 [2024-10-09 00:36:16.615095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.219 [2024-10-09 00:36:16.615125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.219 [2024-10-09 00:36:16.615134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.219 [2024-10-09 00:36:16.615299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.219 [2024-10-09 00:36:16.615450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.219 [2024-10-09 00:36:16.615456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.219 [2024-10-09 00:36:16.615461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.219 [2024-10-09 00:36:16.617862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.219 [2024-10-09 00:36:16.627161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.219 [2024-10-09 00:36:16.627730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.219 [2024-10-09 00:36:16.627759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.219 [2024-10-09 00:36:16.627767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.219 [2024-10-09 00:36:16.627933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.219 [2024-10-09 00:36:16.628084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.220 [2024-10-09 00:36:16.628090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.220 [2024-10-09 00:36:16.628096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.220 [2024-10-09 00:36:16.630493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3429077 Killed "${NVMF_APP[@]}" "$@" 00:28:46.220 [2024-10-09 00:36:16.639797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:46.220 [2024-10-09 00:36:16.640352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.220 [2024-10-09 00:36:16.640383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.220 [2024-10-09 00:36:16.640392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.220 [2024-10-09 00:36:16.640556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:46.220 [2024-10-09 00:36:16.640707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.220 [2024-10-09 00:36:16.640713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.220 [2024-10-09 00:36:16.640728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.220 [2024-10-09 00:36:16.643130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3431241 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3431241 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3431241 ']' 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.220 00:36:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.220 [2024-10-09 00:36:16.652436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.220 [2024-10-09 00:36:16.652854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.220 [2024-10-09 00:36:16.652884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.220 [2024-10-09 00:36:16.652893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.220 [2024-10-09 00:36:16.653060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.220 [2024-10-09 00:36:16.653212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.220 [2024-10-09 00:36:16.653218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.220 [2024-10-09 00:36:16.653224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.220 [2024-10-09 00:36:16.655627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.220 [2024-10-09 00:36:16.665075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.220 [2024-10-09 00:36:16.665533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.220 [2024-10-09 00:36:16.665548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.220 [2024-10-09 00:36:16.665554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.220 [2024-10-09 00:36:16.665703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.220 [2024-10-09 00:36:16.665857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.220 [2024-10-09 00:36:16.665864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.220 [2024-10-09 00:36:16.665868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.220 [2024-10-09 00:36:16.668267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
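The entries above and below follow one pattern from the bdevperf reconnect test: the shell reports that the nvmf target process launched from test/nvmf/host/bdevperf.sh (line 35, "${NVMF_APP[@]}") was killed, so every controller reset attempt from the host ends in connect() errno = 111, which on Linux is ECONNREFUSED, until tgt_init / nvmfappstart brings a new target back up on 10.0.0.2:4420 (new nvmfpid=3431241). A minimal illustrative sketch, not SPDK code, of how errno 111 is produced when nothing is listening on the target port (the loopback address and port below are placeholders chosen for the demo):

    # Illustrative only: connect() to a port with no listener fails with ECONNREFUSED (errno 111).
    import errno
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(("127.0.0.1", 4420))  # placeholder address/port assumed to have no listener
    except OSError as e:
        print(e.errno, errno.errorcode[e.errno])  # expected on Linux: 111 ECONNREFUSED
    finally:
        s.close()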
00:28:46.220 [2024-10-09 00:36:16.677714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.220 [2024-10-09 00:36:16.677945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.220 [2024-10-09 00:36:16.677964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.220 [2024-10-09 00:36:16.677971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.220 [2024-10-09 00:36:16.678125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.220 [2024-10-09 00:36:16.678274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.220 [2024-10-09 00:36:16.678280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.220 [2024-10-09 00:36:16.678284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.220 [2024-10-09 00:36:16.680677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.220 [2024-10-09 00:36:16.690414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.220 [2024-10-09 00:36:16.690828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.220 [2024-10-09 00:36:16.690859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.220 [2024-10-09 00:36:16.690868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.220 [2024-10-09 00:36:16.691035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.220 [2024-10-09 00:36:16.691186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.220 [2024-10-09 00:36:16.691193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.220 [2024-10-09 00:36:16.691199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.220 [2024-10-09 00:36:16.693600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.220 [2024-10-09 00:36:16.701814] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:28:46.220 [2024-10-09 00:36:16.701860] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.220 [2024-10-09 00:36:16.703049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.220 [2024-10-09 00:36:16.703661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.220 [2024-10-09 00:36:16.703696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.220 [2024-10-09 00:36:16.703706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.220 [2024-10-09 00:36:16.703878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.220 [2024-10-09 00:36:16.704030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.220 [2024-10-09 00:36:16.704037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.220 [2024-10-09 00:36:16.704042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.220 [2024-10-09 00:36:16.706440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.220 [2024-10-09 00:36:16.715749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.220 [2024-10-09 00:36:16.716221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.220 [2024-10-09 00:36:16.716236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.220 [2024-10-09 00:36:16.716242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.220 [2024-10-09 00:36:16.716391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.220 [2024-10-09 00:36:16.716539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.220 [2024-10-09 00:36:16.716545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.220 [2024-10-09 00:36:16.716550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.220 [2024-10-09 00:36:16.718943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.220 [2024-10-09 00:36:16.728384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.220 [2024-10-09 00:36:16.729000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.220 [2024-10-09 00:36:16.729030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.220 [2024-10-09 00:36:16.729039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.220 [2024-10-09 00:36:16.729203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.220 [2024-10-09 00:36:16.729355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.220 [2024-10-09 00:36:16.729361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.220 [2024-10-09 00:36:16.729367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.220 [2024-10-09 00:36:16.731772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.220 [2024-10-09 00:36:16.741153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.220 [2024-10-09 00:36:16.741774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.220 [2024-10-09 00:36:16.741804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.220 [2024-10-09 00:36:16.741813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.220 [2024-10-09 00:36:16.741981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.220 [2024-10-09 00:36:16.742136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.221 [2024-10-09 00:36:16.742142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.221 [2024-10-09 00:36:16.742148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.221 [2024-10-09 00:36:16.744548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.221 [2024-10-09 00:36:16.753861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.221 [2024-10-09 00:36:16.754425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-10-09 00:36:16.754456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-10-09 00:36:16.754465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.221 [2024-10-09 00:36:16.754630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.221 [2024-10-09 00:36:16.754787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.221 [2024-10-09 00:36:16.754793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.221 [2024-10-09 00:36:16.754799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.221 [2024-10-09 00:36:16.757197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.221 [2024-10-09 00:36:16.766517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.221 [2024-10-09 00:36:16.766970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-10-09 00:36:16.766985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-10-09 00:36:16.766991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.221 [2024-10-09 00:36:16.767140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.221 [2024-10-09 00:36:16.767288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.221 [2024-10-09 00:36:16.767294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.221 [2024-10-09 00:36:16.767299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.221 [2024-10-09 00:36:16.769689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.221 [2024-10-09 00:36:16.779140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.221 [2024-10-09 00:36:16.779694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-10-09 00:36:16.779729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-10-09 00:36:16.779739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.221 [2024-10-09 00:36:16.779906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.221 [2024-10-09 00:36:16.780057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.221 [2024-10-09 00:36:16.780064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.221 [2024-10-09 00:36:16.780069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.221 [2024-10-09 00:36:16.782464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.221 [2024-10-09 00:36:16.784137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:46.221 [2024-10-09 00:36:16.791797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.221 [2024-10-09 00:36:16.792423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-10-09 00:36:16.792454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-10-09 00:36:16.792463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.221 [2024-10-09 00:36:16.792629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.221 [2024-10-09 00:36:16.792786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.221 [2024-10-09 00:36:16.792793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.221 [2024-10-09 00:36:16.792799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.221 [2024-10-09 00:36:16.795195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.221 [2024-10-09 00:36:16.804407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.221 [2024-10-09 00:36:16.805016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-10-09 00:36:16.805047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-10-09 00:36:16.805056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.221 [2024-10-09 00:36:16.805222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.221 [2024-10-09 00:36:16.805373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.221 [2024-10-09 00:36:16.805379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.221 [2024-10-09 00:36:16.805385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.221 [2024-10-09 00:36:16.807786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.221 [2024-10-09 00:36:16.817100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.221 [2024-10-09 00:36:16.817737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-10-09 00:36:16.817768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-10-09 00:36:16.817778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.221 [2024-10-09 00:36:16.817943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.221 [2024-10-09 00:36:16.818095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.221 [2024-10-09 00:36:16.818101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.221 [2024-10-09 00:36:16.818107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.221 [2024-10-09 00:36:16.820503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.221 [2024-10-09 00:36:16.829676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.221 [2024-10-09 00:36:16.830262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-10-09 00:36:16.830292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-10-09 00:36:16.830307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.221 [2024-10-09 00:36:16.830472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.221 [2024-10-09 00:36:16.830623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.221 [2024-10-09 00:36:16.830630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.221 [2024-10-09 00:36:16.830635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.221 [2024-10-09 00:36:16.833034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.221 [2024-10-09 00:36:16.836886] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.221 [2024-10-09 00:36:16.836908] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.221 [2024-10-09 00:36:16.836914] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.221 [2024-10-09 00:36:16.836920] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.221 [2024-10-09 00:36:16.836924] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.221 [2024-10-09 00:36:16.837787] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.221 [2024-10-09 00:36:16.838111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.221 [2024-10-09 00:36:16.838112] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.221 [2024-10-09 00:36:16.842346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.221 [2024-10-09 00:36:16.842842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-10-09 00:36:16.842872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-10-09 00:36:16.842881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.221 [2024-10-09 00:36:16.843049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.221 [2024-10-09 00:36:16.843200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.221 [2024-10-09 00:36:16.843206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.221 [2024-10-09 00:36:16.843212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:46.221 [2024-10-09 00:36:16.845615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.483 [2024-10-09 00:36:16.854934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.483 [2024-10-09 00:36:16.855394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-10-09 00:36:16.855409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.483 [2024-10-09 00:36:16.855415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.483 [2024-10-09 00:36:16.855564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.483 [2024-10-09 00:36:16.855712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.483 [2024-10-09 00:36:16.855717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.483 [2024-10-09 00:36:16.855728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.483 [2024-10-09 00:36:16.858124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.483 [2024-10-09 00:36:16.867584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.483 [2024-10-09 00:36:16.868228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-10-09 00:36:16.868259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.483 [2024-10-09 00:36:16.868268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.483 [2024-10-09 00:36:16.868434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.483 [2024-10-09 00:36:16.868586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.483 [2024-10-09 00:36:16.868592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.483 [2024-10-09 00:36:16.868598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.483 [2024-10-09 00:36:16.871000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
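For reference on the restart parameters visible above: the new target is started with core mask -m 0xE (binary 1110), which selects cores 1, 2 and 3; that matches the "Total cores available: 3" notice and the three reactors started on cores 1, 2 and 3. A small sketch of how such a mask decodes (the helper below is illustrative, not an SPDK utility):

    # Decode a DPDK/SPDK-style hex core mask such as -m 0xE into core indices.
    def cores_from_mask(mask):
        return [i for i in range(mask.bit_length()) if (mask >> i) & 1]

    print(cores_from_mask(0xE))  # -> [1, 2, 3]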
00:28:46.483 [2024-10-09 00:36:16.880174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.483 [2024-10-09 00:36:16.880716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.483 [2024-10-09 00:36:16.880752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.483 [2024-10-09 00:36:16.880761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.483 [2024-10-09 00:36:16.880928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.881080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.881086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.881092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.883491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-10-09 00:36:16.892817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:16.893404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:16.893434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:16.893443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:16.893608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.893766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.893773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.893779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.896175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.484 [2024-10-09 00:36:16.905484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:16.906094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:16.906125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:16.906137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:16.906302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.906453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.906460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.906465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.908870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-10-09 00:36:16.918180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:16.918716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:16.918753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:16.918762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:16.918928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.919079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.919085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.919091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.921490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.484 [2024-10-09 00:36:16.930805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:16.931395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:16.931425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:16.931434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:16.931599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.931756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.931763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.931769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.934163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-10-09 00:36:16.943473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:16.944070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:16.944101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:16.944110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:16.944276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.944428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.944438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.944444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.946848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.484 [2024-10-09 00:36:16.956159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:16.956653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:16.956668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:16.956674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:16.956827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.956976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.956981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.956986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.959378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-10-09 00:36:16.968832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:16.969299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:16.969311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:16.969317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:16.969465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.969613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.969619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.969624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.972017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.484 [2024-10-09 00:36:16.981455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:16.981703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:16.981729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:16.981740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:16.981890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.982038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.982044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.982049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.984441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-10-09 00:36:16.994114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:16.994575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:16.994587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:16.994593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:16.994745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:16.994894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:16.994900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:16.994905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:16.997294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.484 [2024-10-09 00:36:17.006736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:17.007182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.484 [2024-10-09 00:36:17.007194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.484 [2024-10-09 00:36:17.007200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.484 [2024-10-09 00:36:17.007349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.484 [2024-10-09 00:36:17.007497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.484 [2024-10-09 00:36:17.007502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.484 [2024-10-09 00:36:17.007509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.484 [2024-10-09 00:36:17.009939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.484 [2024-10-09 00:36:17.019382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.484 [2024-10-09 00:36:17.019958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-10-09 00:36:17.019988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-10-09 00:36:17.019997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.485 [2024-10-09 00:36:17.020162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.485 [2024-10-09 00:36:17.020313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-10-09 00:36:17.020319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-10-09 00:36:17.020325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-10-09 00:36:17.022729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.485 [2024-10-09 00:36:17.032038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-10-09 00:36:17.032574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-10-09 00:36:17.032605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-10-09 00:36:17.032614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.485 [2024-10-09 00:36:17.032789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.485 [2024-10-09 00:36:17.032941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-10-09 00:36:17.032948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-10-09 00:36:17.032953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-10-09 00:36:17.035350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.485 [2024-10-09 00:36:17.044653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-10-09 00:36:17.045085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-10-09 00:36:17.045115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-10-09 00:36:17.045124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.485 [2024-10-09 00:36:17.045289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.485 [2024-10-09 00:36:17.045441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-10-09 00:36:17.045447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-10-09 00:36:17.045452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-10-09 00:36:17.047854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.485 [2024-10-09 00:36:17.057337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-10-09 00:36:17.057868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-10-09 00:36:17.057899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-10-09 00:36:17.057907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.485 [2024-10-09 00:36:17.058072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.485 [2024-10-09 00:36:17.058224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-10-09 00:36:17.058230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-10-09 00:36:17.058235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-10-09 00:36:17.060633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.485 [2024-10-09 00:36:17.069950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-10-09 00:36:17.070402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-10-09 00:36:17.070433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-10-09 00:36:17.070442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.485 [2024-10-09 00:36:17.070609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.485 [2024-10-09 00:36:17.070765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-10-09 00:36:17.070772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-10-09 00:36:17.070781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-10-09 00:36:17.073178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.485 [2024-10-09 00:36:17.082626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-10-09 00:36:17.083187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-10-09 00:36:17.083217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-10-09 00:36:17.083226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.485 [2024-10-09 00:36:17.083390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.485 [2024-10-09 00:36:17.083541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-10-09 00:36:17.083547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-10-09 00:36:17.083553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-10-09 00:36:17.085951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.485 [2024-10-09 00:36:17.095284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-10-09 00:36:17.095779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-10-09 00:36:17.095803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-10-09 00:36:17.095809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.485 [2024-10-09 00:36:17.095958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.485 [2024-10-09 00:36:17.096106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-10-09 00:36:17.096112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-10-09 00:36:17.096117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-10-09 00:36:17.098507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.485 [2024-10-09 00:36:17.107948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.485 [2024-10-09 00:36:17.108396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.485 [2024-10-09 00:36:17.108409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.485 [2024-10-09 00:36:17.108414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.485 [2024-10-09 00:36:17.108562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.485 [2024-10-09 00:36:17.108710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.485 [2024-10-09 00:36:17.108715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.485 [2024-10-09 00:36:17.108724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.485 [2024-10-09 00:36:17.111118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.748 [2024-10-09 00:36:17.120555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.748 [2024-10-09 00:36:17.121007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-10-09 00:36:17.121020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.748 [2024-10-09 00:36:17.121025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.748 [2024-10-09 00:36:17.121173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.748 [2024-10-09 00:36:17.121321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.748 [2024-10-09 00:36:17.121327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.748 [2024-10-09 00:36:17.121331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.748 [2024-10-09 00:36:17.123722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.748 [2024-10-09 00:36:17.133156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.748 [2024-10-09 00:36:17.133705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-10-09 00:36:17.133741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.748 [2024-10-09 00:36:17.133750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.748 [2024-10-09 00:36:17.133917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.748 [2024-10-09 00:36:17.134068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.748 [2024-10-09 00:36:17.134075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.748 [2024-10-09 00:36:17.134080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.748 [2024-10-09 00:36:17.136476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.748 [2024-10-09 00:36:17.145775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.748 [2024-10-09 00:36:17.146267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.748 [2024-10-09 00:36:17.146297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.146306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.146471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.146622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.146628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.146634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.149031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.749 [2024-10-09 00:36:17.158472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.159049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.159079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.159088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.159253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.159408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.159414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.159420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.161827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.749 [2024-10-09 00:36:17.171134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.171562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.171592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.171600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.171770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.171922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.171928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.171934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.174329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.749 [2024-10-09 00:36:17.183778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.184338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.184368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.184377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.184543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.184694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.184701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.184707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.187114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.749 [2024-10-09 00:36:17.196428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.196873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.196889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.196895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.197043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.197191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.197198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.197204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.199598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.749 4351.67 IOPS, 17.00 MiB/s [2024-10-08T22:36:17.384Z] [2024-10-09 00:36:17.209052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.209567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.209598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.209607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.209777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.209929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.209935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.209941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.212335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.749 [2024-10-09 00:36:17.221665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.222214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.222245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.222254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.222418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.222570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.222576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.222581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.224983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.749 [2024-10-09 00:36:17.234282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.234724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.234739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.234745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.234894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.235042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.235048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.235053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.237440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.749 [2024-10-09 00:36:17.246883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.247322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.247335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.247344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.247492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.247640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.247645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.247650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.250104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.749 [2024-10-09 00:36:17.259541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.259964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.259994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.260003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.260168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.260319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.260325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.260331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.262732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.749 [2024-10-09 00:36:17.272184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.749 [2024-10-09 00:36:17.272627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.749 [2024-10-09 00:36:17.272641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.749 [2024-10-09 00:36:17.272647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.749 [2024-10-09 00:36:17.272799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.749 [2024-10-09 00:36:17.272947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.749 [2024-10-09 00:36:17.272953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.749 [2024-10-09 00:36:17.272958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.749 [2024-10-09 00:36:17.275349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.750 [2024-10-09 00:36:17.284786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.750 [2024-10-09 00:36:17.285261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.750 [2024-10-09 00:36:17.285274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.750 [2024-10-09 00:36:17.285279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.750 [2024-10-09 00:36:17.285427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.750 [2024-10-09 00:36:17.285575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.750 [2024-10-09 00:36:17.285584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.750 [2024-10-09 00:36:17.285589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.750 [2024-10-09 00:36:17.287990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.750 [2024-10-09 00:36:17.297423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.750 [2024-10-09 00:36:17.297989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.750 [2024-10-09 00:36:17.298020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.750 [2024-10-09 00:36:17.298029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.750 [2024-10-09 00:36:17.298193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.750 [2024-10-09 00:36:17.298344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.750 [2024-10-09 00:36:17.298351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.750 [2024-10-09 00:36:17.298356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.750 [2024-10-09 00:36:17.300751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.750 [2024-10-09 00:36:17.310059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.750 [2024-10-09 00:36:17.310594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.750 [2024-10-09 00:36:17.310625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.750 [2024-10-09 00:36:17.310634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.750 [2024-10-09 00:36:17.310804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.750 [2024-10-09 00:36:17.310956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.750 [2024-10-09 00:36:17.310962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.750 [2024-10-09 00:36:17.310968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.750 [2024-10-09 00:36:17.313360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.750 [2024-10-09 00:36:17.322666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.750 [2024-10-09 00:36:17.323155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.750 [2024-10-09 00:36:17.323185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.750 [2024-10-09 00:36:17.323195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.750 [2024-10-09 00:36:17.323363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.750 [2024-10-09 00:36:17.323514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.750 [2024-10-09 00:36:17.323520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.750 [2024-10-09 00:36:17.323526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.750 [2024-10-09 00:36:17.325932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.750 [2024-10-09 00:36:17.335249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.750 [2024-10-09 00:36:17.335817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.750 [2024-10-09 00:36:17.335848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.750 [2024-10-09 00:36:17.335857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.750 [2024-10-09 00:36:17.336023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.750 [2024-10-09 00:36:17.336175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.750 [2024-10-09 00:36:17.336181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.750 [2024-10-09 00:36:17.336187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.750 [2024-10-09 00:36:17.338590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.750 [2024-10-09 00:36:17.347905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.750 [2024-10-09 00:36:17.348441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.750 [2024-10-09 00:36:17.348472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.750 [2024-10-09 00:36:17.348480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.750 [2024-10-09 00:36:17.348646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.750 [2024-10-09 00:36:17.348804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.750 [2024-10-09 00:36:17.348810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.750 [2024-10-09 00:36:17.348816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.750 [2024-10-09 00:36:17.351212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.750 [2024-10-09 00:36:17.360522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.750 [2024-10-09 00:36:17.361080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.750 [2024-10-09 00:36:17.361111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.750 [2024-10-09 00:36:17.361120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.750 [2024-10-09 00:36:17.361284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.750 [2024-10-09 00:36:17.361435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.750 [2024-10-09 00:36:17.361442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.750 [2024-10-09 00:36:17.361447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.750 [2024-10-09 00:36:17.363848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.750 [2024-10-09 00:36:17.373168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.750 [2024-10-09 00:36:17.373610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.750 [2024-10-09 00:36:17.373625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:46.750 [2024-10-09 00:36:17.373634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:46.750 [2024-10-09 00:36:17.373788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:46.750 [2024-10-09 00:36:17.373937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.750 [2024-10-09 00:36:17.373943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.750 [2024-10-09 00:36:17.373948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.750 [2024-10-09 00:36:17.376341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.012 [2024-10-09 00:36:17.385792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.012 [2024-10-09 00:36:17.386260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-10-09 00:36:17.386273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.012 [2024-10-09 00:36:17.386278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.012 [2024-10-09 00:36:17.386426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.012 [2024-10-09 00:36:17.386574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.012 [2024-10-09 00:36:17.386580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.012 [2024-10-09 00:36:17.386585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.012 [2024-10-09 00:36:17.389154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.012 [2024-10-09 00:36:17.398470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.012 [2024-10-09 00:36:17.398854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-10-09 00:36:17.398886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.012 [2024-10-09 00:36:17.398896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.012 [2024-10-09 00:36:17.399064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.012 [2024-10-09 00:36:17.399216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.012 [2024-10-09 00:36:17.399223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.012 [2024-10-09 00:36:17.399229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.012 [2024-10-09 00:36:17.401629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.012 [2024-10-09 00:36:17.411084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.012 [2024-10-09 00:36:17.411623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-10-09 00:36:17.411654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.012 [2024-10-09 00:36:17.411662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.012 [2024-10-09 00:36:17.411836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.012 [2024-10-09 00:36:17.411988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.012 [2024-10-09 00:36:17.411994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.012 [2024-10-09 00:36:17.412003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.012 [2024-10-09 00:36:17.414400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.012 [2024-10-09 00:36:17.423712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.012 [2024-10-09 00:36:17.424364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-10-09 00:36:17.424396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.012 [2024-10-09 00:36:17.424405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.012 [2024-10-09 00:36:17.424569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.012 [2024-10-09 00:36:17.424726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.012 [2024-10-09 00:36:17.424733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.012 [2024-10-09 00:36:17.424739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.012 [2024-10-09 00:36:17.427137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.012 [2024-10-09 00:36:17.436307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.012 [2024-10-09 00:36:17.436872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.012 [2024-10-09 00:36:17.436903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.012 [2024-10-09 00:36:17.436911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.437078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.437229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.437236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.437242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 [2024-10-09 00:36:17.439648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.013 [2024-10-09 00:36:17.448959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.449498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.449528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.013 [2024-10-09 00:36:17.449537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.449702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.449860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.449867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.449873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 [2024-10-09 00:36:17.452270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.013 [2024-10-09 00:36:17.461576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.461915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.461929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.013 [2024-10-09 00:36:17.461936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.462084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.462233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.462239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.462243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 [2024-10-09 00:36:17.464637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.013 [2024-10-09 00:36:17.474243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.474619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.474632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.013 [2024-10-09 00:36:17.474637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.474791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.474941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.474946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.474952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 [2024-10-09 00:36:17.477342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.013 [2024-10-09 00:36:17.486939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.487535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.487565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.013 [2024-10-09 00:36:17.487575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.487744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.487896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.487903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.487909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 [2024-10-09 00:36:17.490318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.013 [2024-10-09 00:36:17.499636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.500099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.500114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.013 [2024-10-09 00:36:17.500120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.500272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.500420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.500426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.500431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:47.013 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:47.013 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:47.013 [2024-10-09 00:36:17.502825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.013 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:47.013 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.013 [2024-10-09 00:36:17.512293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.512755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.512769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.013 [2024-10-09 00:36:17.512774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.512924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.513072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.513077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.513082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 [2024-10-09 00:36:17.515471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.013 [2024-10-09 00:36:17.524925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.525428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.525459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.013 [2024-10-09 00:36:17.525468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.525634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.525793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.525801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.525807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 [2024-10-09 00:36:17.528208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.013 [2024-10-09 00:36:17.537522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.537870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.537887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.013 [2024-10-09 00:36:17.537893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.538047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.538197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.538203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.538208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 [2024-10-09 00:36:17.540603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.013 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.013 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:47.013 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.013 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.013 [2024-10-09 00:36:17.549309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.013 [2024-10-09 00:36:17.550199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.550633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.550646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.013 [2024-10-09 00:36:17.550651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.013 [2024-10-09 00:36:17.550802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.013 [2024-10-09 00:36:17.550951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.013 [2024-10-09 00:36:17.550957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.013 [2024-10-09 00:36:17.550962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.013 [2024-10-09 00:36:17.553350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.013 [2024-10-09 00:36:17.562797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.013 [2024-10-09 00:36:17.563256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.013 [2024-10-09 00:36:17.563268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.014 [2024-10-09 00:36:17.563274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.014 [2024-10-09 00:36:17.563422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.014 [2024-10-09 00:36:17.563569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.014 [2024-10-09 00:36:17.563575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.014 [2024-10-09 00:36:17.563580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.014 [2024-10-09 00:36:17.565974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.014 [2024-10-09 00:36:17.575425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.014 [2024-10-09 00:36:17.575868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.014 [2024-10-09 00:36:17.575898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.014 [2024-10-09 00:36:17.575907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.014 [2024-10-09 00:36:17.576073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.014 [2024-10-09 00:36:17.576223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.014 [2024-10-09 00:36:17.576229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.014 [2024-10-09 00:36:17.576235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.014 [2024-10-09 00:36:17.578641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.014 Malloc0 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.014 [2024-10-09 00:36:17.588112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.014 [2024-10-09 00:36:17.588455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.014 [2024-10-09 00:36:17.588470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.014 [2024-10-09 00:36:17.588476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.014 [2024-10-09 00:36:17.588625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.014 [2024-10-09 00:36:17.588778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.014 [2024-10-09 00:36:17.588784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.014 [2024-10-09 00:36:17.588789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.014 [2024-10-09 00:36:17.591180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.014 [2024-10-09 00:36:17.600776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.014 [2024-10-09 00:36:17.601156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.014 [2024-10-09 00:36:17.601169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.014 [2024-10-09 00:36:17.601174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.014 [2024-10-09 00:36:17.601323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.014 [2024-10-09 00:36:17.601471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.014 [2024-10-09 00:36:17.601477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.014 [2024-10-09 00:36:17.601486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:47.014 [2024-10-09 00:36:17.603880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.014 [2024-10-09 00:36:17.613473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.014 [2024-10-09 00:36:17.613912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.014 [2024-10-09 00:36:17.613925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74a0c0 with addr=10.0.0.2, port=4420 00:28:47.014 [2024-10-09 00:36:17.613931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74a0c0 is same with the state(6) to be set 00:28:47.014 [2024-10-09 00:36:17.614080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a0c0 (9): Bad file descriptor 00:28:47.014 [2024-10-09 00:36:17.614228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.014 [2024-10-09 00:36:17.614234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.014 [2024-10-09 00:36:17.614238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.014 [2024-10-09 00:36:17.616628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.014 [2024-10-09 00:36:17.618145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.014 00:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3429975 00:28:47.014 [2024-10-09 00:36:17.626081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.275 [2024-10-09 00:36:17.701792] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
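The resets stop failing at this point because the listener RPC has landed: the interleaved rpc_cmd calls above rebuild the target (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener), and the "Resetting controller successful" notice follows the "Listening on 10.0.0.2 port 4420" message. Reproducing the same bring-up by hand against a running nvmf_tgt would look roughly like the sketch below; the arguments are copied from the log, and it assumes rpc_cmd is effectively a thin wrapper around scripts/rpc.py:

    # stand up the bdevperf target state by hand (arguments as seen in the log)
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420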
00:28:48.807 4655.86 IOPS, 18.19 MiB/s [2024-10-08T22:36:20.395Z] 5691.88 IOPS, 22.23 MiB/s [2024-10-08T22:36:21.337Z] 6495.22 IOPS, 25.37 MiB/s [2024-10-08T22:36:22.277Z] 7144.90 IOPS, 27.91 MiB/s [2024-10-08T22:36:23.660Z] 7672.73 IOPS, 29.97 MiB/s [2024-10-08T22:36:24.231Z] 8099.25 IOPS, 31.64 MiB/s [2024-10-08T22:36:25.612Z] 8463.77 IOPS, 33.06 MiB/s [2024-10-08T22:36:26.553Z] 8786.71 IOPS, 34.32 MiB/s [2024-10-08T22:36:26.553Z] 9063.33 IOPS, 35.40 MiB/s 00:28:55.918 Latency(us) 00:28:55.918 [2024-10-08T22:36:26.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.918 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:55.918 Verification LBA range: start 0x0 length 0x4000 00:28:55.918 Nvme1n1 : 15.00 9068.10 35.42 13118.29 0.00 5749.95 546.13 16165.55 00:28:55.918 [2024-10-08T22:36:26.553Z] =================================================================================================================== 00:28:55.918 [2024-10-08T22:36:26.553Z] Total : 9068.10 35.42 13118.29 0.00 5749.95 546.13 16165.55 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.918 rmmod nvme_tcp 00:28:55.918 rmmod nvme_fabrics 00:28:55.918 rmmod nvme_keyring 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 3431241 ']' 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 3431241 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3431241 ']' 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3431241 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3431241 
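The MiB/s column in the bdevperf summary above is simply IOPS multiplied by the 4096-byte I/O size. A one-line sanity check of the reported 9068.10 IOPS figure (illustrative only, not part of the harness):

    # 9068.10 IOPS * 4096 bytes per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 9068.10 * 4096 / (1024 * 1024) }'
    # prints 35.42 MiB/s, matching the Nvme1n1 row and the Total row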
00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3431241' 00:28:55.918 killing process with pid 3431241 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3431241 00:28:55.918 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3431241 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.179 00:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.092 00:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.092 00:28:58.092 real 0m28.372s 00:28:58.092 user 1m3.985s 00:28:58.092 sys 0m7.629s 00:28:58.092 00:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.092 00:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.092 ************************************ 00:28:58.092 END TEST nvmf_bdevperf 00:28:58.092 ************************************ 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.370 ************************************ 00:28:58.370 START TEST nvmf_target_disconnect 00:28:58.370 ************************************ 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:58.370 * Looking for test storage... 
00:28:58.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:58.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.370 --rc genhtml_branch_coverage=1 00:28:58.370 --rc genhtml_function_coverage=1 00:28:58.370 --rc genhtml_legend=1 00:28:58.370 --rc geninfo_all_blocks=1 00:28:58.370 --rc geninfo_unexecuted_blocks=1 00:28:58.370 00:28:58.370 ' 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:58.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.370 --rc genhtml_branch_coverage=1 00:28:58.370 --rc genhtml_function_coverage=1 00:28:58.370 --rc genhtml_legend=1 00:28:58.370 --rc geninfo_all_blocks=1 00:28:58.370 --rc geninfo_unexecuted_blocks=1 00:28:58.370 00:28:58.370 ' 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:58.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.370 --rc genhtml_branch_coverage=1 00:28:58.370 --rc genhtml_function_coverage=1 00:28:58.370 --rc genhtml_legend=1 00:28:58.370 --rc geninfo_all_blocks=1 00:28:58.370 --rc geninfo_unexecuted_blocks=1 00:28:58.370 00:28:58.370 ' 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:58.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.370 --rc genhtml_branch_coverage=1 00:28:58.370 --rc genhtml_function_coverage=1 00:28:58.370 --rc genhtml_legend=1 00:28:58.370 --rc geninfo_all_blocks=1 00:28:58.370 --rc geninfo_unexecuted_blocks=1 00:28:58.370 00:28:58.370 ' 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.370 00:36:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:58.370 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.370 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.632 00:36:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.791 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:06.792 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:06.792 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:06.792 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:06.792 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:29:06.792 00:29:06.792 --- 10.0.0.2 ping statistics --- 00:29:06.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.792 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:29:06.792 00:29:06.792 --- 10.0.0.1 ping statistics --- 00:29:06.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.792 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.792 ************************************ 00:29:06.792 START TEST nvmf_target_disconnect_tc1 00:29:06.792 ************************************ 00:29:06.792 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:06.793 00:36:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.793 [2024-10-09 00:36:36.700762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.793 [2024-10-09 00:36:36.700869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacbba0 with addr=10.0.0.2, port=4420 00:29:06.793 [2024-10-09 00:36:36.700907] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:06.793 [2024-10-09 00:36:36.700926] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:06.793 [2024-10-09 00:36:36.700935] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:06.793 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:06.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:06.793 Initializing NVMe Controllers 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:06.793 00:29:06.793 real 0m0.133s 00:29:06.793 user 0m0.054s 00:29:06.793 sys 0m0.078s 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 ************************************ 00:29:06.793 END TEST nvmf_target_disconnect_tc1 00:29:06.793 ************************************ 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
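The tc1 case above passes precisely because the probe fails: target_disconnect.sh runs the reconnect example under the harness's NOT helper, which inverts the exit status, so a refused connection to 10.0.0.2:4420 (no subsystem is listening yet) is the success condition. Re-running just that step by hand could look like the sketch below; the binary path and flags are taken from the log, and the explicit "if !" stands in for the NOT helper:

    # expect the reconnect example to fail while nothing listens on 10.0.0.2:4420
    if ! ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "expected failure observed: probe was refused, tc1 condition met"
    fi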
00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 ************************************ 00:29:06.793 START TEST nvmf_target_disconnect_tc2 00:29:06.793 ************************************ 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3437291 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3437291 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3437291 ']' 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.793 00:36:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 [2024-10-09 00:36:36.862137] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:29:06.793 [2024-10-09 00:36:36.862196] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.793 [2024-10-09 00:36:36.956637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.793 [2024-10-09 00:36:37.049047] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.793 [2024-10-09 00:36:37.049109] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:06.793 [2024-10-09 00:36:37.049117] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.793 [2024-10-09 00:36:37.049124] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.793 [2024-10-09 00:36:37.049131] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.793 [2024-10-09 00:36:37.051359] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:29:06.793 [2024-10-09 00:36:37.051559] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:29:06.793 [2024-10-09 00:36:37.051700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:29:06.793 [2024-10-09 00:36:37.051701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:29:07.054 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.054 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.316 Malloc0 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.316 [2024-10-09 00:36:37.766149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.316 00:36:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.316 [2024-10-09 00:36:37.806612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3437596 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:07.316 00:36:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.233 00:36:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3437291 00:29:09.233 00:36:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error 
(sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 [2024-10-09 00:36:39.846312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Read completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, sc=8) 00:29:09.233 starting I/O failed 00:29:09.233 Write completed with error (sct=0, 
sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Read completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Read completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Read completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Read completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Read completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Read completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Read completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 Write completed with error (sct=0, sc=8) 00:29:09.234 starting I/O failed 00:29:09.234 [2024-10-09 00:36:39.846681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:09.234 [2024-10-09 00:36:39.847209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.847279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.847519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.847536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.848016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.848077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.848415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.848431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.848643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.848655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 
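The burst above is the expected fallout of target_disconnect.sh killing the target (kill -9 3437291) two seconds into the reconnect workload: in-flight commands on qpairs 3 and 4 are completed with an error status (sct=0, sc=8), the completion path reports CQ transport error -6 (No such device or address), and the reconnect attempts that follow fail with ECONNREFUSED because nothing is listening on 10.0.0.2:4420 any more. For reference, a hedged sketch of the target-side sequence driven through rpc_cmd in this test, written against scripts/rpc.py of the same SPDK tree (running outside the cvl_0_0_ns_spdk network namespace and the background/kill handling are simplifying assumptions; the RPC names and arguments mirror the calls recorded above):

  # Bring up an NVMe-oF TCP target backed by a 64 MB malloc bdev, then kill it
  # abruptly to provoke the aborted completions and reconnect failures seen here.
  build/bin/nvmf_tgt -m 0xF0 &
  tgt_pid=$!
  sleep 2                                    # simplified stand-in for waitforlisten
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # start the reconnect workload against 10.0.0.2:4420 in the background, then:
  kill -9 "$tgt_pid"                         # simulate an abrupt target disconnect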
00:29:09.234 [2024-10-09 00:36:39.849078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.849140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.849515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.849530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.849961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.850024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.850408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.850423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.850969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.851031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.851382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.851396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.851712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.851735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.852053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.852064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.852435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.852447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.852801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.852815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 
00:29:09.234 [2024-10-09 00:36:39.853107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.853120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.853429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.853448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.853764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.853776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.854144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.854156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.854285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.854296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.854518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.854530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.854757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.854769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.855082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.855094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.855425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.855437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.855645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.855658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 
00:29:09.234 [2024-10-09 00:36:39.856018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.856031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.856328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.856340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.856677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.856689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.857045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.857057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.857365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.857377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.857733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.857745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.858086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.858097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.858416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.234 [2024-10-09 00:36:39.858428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.234 qpair failed and we were unable to recover it. 00:29:09.234 [2024-10-09 00:36:39.858765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.858777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.859126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.859138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 
00:29:09.235 [2024-10-09 00:36:39.859324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.859336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.859526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.859538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.859850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.859863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.860171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.860183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.860538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.860550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.860891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.860904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.861268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.861280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.861624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.861636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.861855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.861868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.862186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.862198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 
00:29:09.235 [2024-10-09 00:36:39.862518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.862530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.862859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.862872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.863107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.863119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.863383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.863395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.863730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.863743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.863986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.863998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.864331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.864342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.864557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.864568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.864802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.864814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.865168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.865180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 
00:29:09.235 [2024-10-09 00:36:39.865474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.865486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.865850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.865862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.235 [2024-10-09 00:36:39.866191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.235 [2024-10-09 00:36:39.866204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.235 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.866521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.866537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.866747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.866760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.867070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.867082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.867439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.867451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.867795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.867807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.868131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.868143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.868498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.868510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-10-09 00:36:39.868824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.868835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.869146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.869158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.869371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.869383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.869684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.869695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.870030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.870042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.870428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.870439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.870789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.870800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.870992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.871005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.871335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.871346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.871652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.871663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-10-09 00:36:39.871891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.871903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.872265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.872275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.872669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.872680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.872998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.873009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.873349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.873359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.873561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.873573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.873904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.873916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.874230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.874241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.874601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.874615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.874949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.874960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 
00:29:09.508 [2024-10-09 00:36:39.875306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.875317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.508 [2024-10-09 00:36:39.875518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.508 [2024-10-09 00:36:39.875529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.508 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.875854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.875866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.876175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.876186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.876489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.876502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.876811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.876823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.877164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.877178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.877572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.877586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.877898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.877913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.878235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.878249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-10-09 00:36:39.878599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.878614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.878955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.878970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.879299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.879314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.879707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.879745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.879927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.879942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.880264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.880278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.880611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.880626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.880948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.880964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.881264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.881280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.881609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.881623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-10-09 00:36:39.881953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.881968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.882293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.882307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.882607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.882621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.882947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.882962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.883365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.883380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.883701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.883715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.884051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.884066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.884390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.884404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.884747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.884763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.885113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.885127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-10-09 00:36:39.885424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.885440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.885770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.885785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.886106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.886121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.886340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.886354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.886681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.886696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.887054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.887069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.887465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.887480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.887792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.887806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.888145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.888163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.888477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.888491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 
00:29:09.509 [2024-10-09 00:36:39.888873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.888888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.509 [2024-10-09 00:36:39.889219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.509 [2024-10-09 00:36:39.889234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.509 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.889559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.889577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.889903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.889922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.890160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.890179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.890600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.890618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.890963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.890982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.891333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.891351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.891690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.891709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.892039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.892058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-10-09 00:36:39.892381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.892401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.892746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.892765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.892990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.893008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.893223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.893242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.893592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.893610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.893998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.894017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.894309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.894328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.894648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.894668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.895009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.895027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.895346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.895366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-10-09 00:36:39.895702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.895727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.896053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.896071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.896394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.896414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.896751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.896772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.896974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.896995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.897351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.897369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.897705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.897732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.898056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.898076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.898405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.898423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.898749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.898782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-10-09 00:36:39.899067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.899091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.899452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.899476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.899841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.899865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.900194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.900217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.900582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.900606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.900852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.900876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.901151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.901176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.901524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.901547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.901909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.901939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.902290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.902314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-10-09 00:36:39.902631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.902656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.903006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-10-09 00:36:39.903030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-10-09 00:36:39.903342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.903366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.903736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.903760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.904089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.904113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.904476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.904500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.904952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.904977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.905306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.905331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.905680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.905703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.906070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.906095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-10-09 00:36:39.906462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.906485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.906869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.906893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.907256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.907280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.907621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.907644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.907995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.908019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.908382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.908405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.908774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.908799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.909173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.909197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.909567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.909596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.909978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.910009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-10-09 00:36:39.910264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.910293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.910654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.910684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.911062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.911093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.911455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.911484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.911792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.911822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.912200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.912230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.912529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.912557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.912911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.912941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.913315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.913344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.913703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.913741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-10-09 00:36:39.914087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.914116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.914473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.914502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.914880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.914909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.915270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.915299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.915544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.915574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.915856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.915890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.916148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.916177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.916521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.916551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.916880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.916917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.917293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.917323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-10-09 00:36:39.917689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.917718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.918121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-10-09 00:36:39.918151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-10-09 00:36:39.918496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.918525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.918882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.918913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.919308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.919337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.919682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.919712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.920065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.920094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.920456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.920484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.920845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.920876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.921229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.921259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-10-09 00:36:39.921627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.921656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.921927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.921957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.922343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.922372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.922715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.922754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.923095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.923125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.923471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.923499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.923866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.923896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.924239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.924268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.924571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.924600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.924962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.924993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-10-09 00:36:39.925353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.925382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.925739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.925769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.926096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.926125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.926482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.926511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.926878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.926907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.927252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.927281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.927540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.927569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.927929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.927959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.928321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.928351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.928676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.928706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-10-09 00:36:39.929116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.929146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.929494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.929523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.929866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.929897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.930268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.930297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.930563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.930591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-10-09 00:36:39.930921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-10-09 00:36:39.930951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.931301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.931332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.931694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.931731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.932090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.932125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.932456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.932487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 
00:29:09.513 [2024-10-09 00:36:39.932849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.932879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.933247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.933276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.933646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.933675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.934036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.934066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.934421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.934450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.934789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.934820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.935251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.935280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.935636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.935665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.936038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.936069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.936440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.936469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 
00:29:09.513 [2024-10-09 00:36:39.936727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.936757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.937157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.937187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.937554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.937583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.937944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.937975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.938335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.938364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.938616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.938648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.938904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.938935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.939274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.939304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.939681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.939710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.940120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.940149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 
00:29:09.513 [2024-10-09 00:36:39.940492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.940523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.940895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.940925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.941283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.941312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.941676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.941705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.942072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.942103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.942500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.942529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.942888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.942918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.943276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.943306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.943677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.943705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.944092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.944122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 
00:29:09.513 [2024-10-09 00:36:39.944538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.944567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.944791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.944823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.945192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.945221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.945598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.945627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.945991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.946022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.513 [2024-10-09 00:36:39.946391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.513 [2024-10-09 00:36:39.946420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.513 qpair failed and we were unable to recover it. 00:29:09.514 [2024-10-09 00:36:39.946780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.514 [2024-10-09 00:36:39.946811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.514 qpair failed and we were unable to recover it. 00:29:09.514 [2024-10-09 00:36:39.947151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.514 [2024-10-09 00:36:39.947180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.514 qpair failed and we were unable to recover it. 00:29:09.514 [2024-10-09 00:36:39.947541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.514 [2024-10-09 00:36:39.947577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.514 qpair failed and we were unable to recover it. 00:29:09.514 [2024-10-09 00:36:39.947954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.514 [2024-10-09 00:36:39.947984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.514 qpair failed and we were unable to recover it. 
00:29:09.514 [2024-10-09 00:36:39.948379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.514 [2024-10-09 00:36:39.948408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.514 qpair failed and we were unable to recover it.
00:29:09.514 [the same message group — posix.c:1055:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously from 2024-10-09 00:36:39.948 through 2024-10-09 00:36:40.027]
00:29:09.519 [2024-10-09 00:36:40.027683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.027713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.028151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.028180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.028535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.028566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.028917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.028950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.029254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.029284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.029637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.029667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.030016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.030048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.030280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.030311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.030695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.030734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.031127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.031156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-10-09 00:36:40.031406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.031435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.031795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.031826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.032068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.032098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.032494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.032524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.032888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.032925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.033179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.033209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.033576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.033606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.033861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.033891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.034259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.034288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.034651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.034681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-10-09 00:36:40.035122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.035155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-10-09 00:36:40.035526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-10-09 00:36:40.035555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.035940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.035971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.036337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.036369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.036746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.036778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.037032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.037061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.037320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.037349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.037711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.037747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.038100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.038138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.038494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.038522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-10-09 00:36:40.038895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.038925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.039315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.039345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.039588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.039625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.039885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.039918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.040297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.040327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.040684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.040716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.040982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.041017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.041409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.041441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.041791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.041827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.042160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.042189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-10-09 00:36:40.042557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.042586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.042961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.042991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.043363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.043393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.043799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.043829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.044079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.044109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.044407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.044437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.044678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.044706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.045102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.045132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.045400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.045429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.045860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.045893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-10-09 00:36:40.046319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.046352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.046498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.046531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.046859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.046891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.047208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.047238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.047444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.047474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.047778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.047810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.047992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.048021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.048284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.048315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.048473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.048500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.048695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.048736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-10-09 00:36:40.048989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.049019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-10-09 00:36:40.049286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-10-09 00:36:40.049316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.049664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.049694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.049943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.049974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.050415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.050445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.050805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.050836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.051236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.051265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.051621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.051652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.052010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.052041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.052388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.052419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 
00:29:09.521 [2024-10-09 00:36:40.052629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.052659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.052932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.052961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.053319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.053349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.053735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.053766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.054107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.054135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.054379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.054407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.054784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.054815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.055192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.055221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.055474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.055502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.055857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.055888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 
00:29:09.521 [2024-10-09 00:36:40.056260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.056289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.056542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.056570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.056952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.056982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.057335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.057365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.057744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.057774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.058175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.058205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.058660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.058690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.058940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.058974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.059363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.059392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.059767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.059798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 
00:29:09.521 [2024-10-09 00:36:40.060183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.060213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.060457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.060489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.060873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.060903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.061150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.061182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.061546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.061575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.061937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.061975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.062347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.062376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.521 qpair failed and we were unable to recover it. 00:29:09.521 [2024-10-09 00:36:40.062655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.521 [2024-10-09 00:36:40.062684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.062976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.063006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.063360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.063389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 
00:29:09.522 [2024-10-09 00:36:40.063552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.063583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.063961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.063992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.064360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.064388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.064551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.064580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.064941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.064972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.065345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.065375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.065649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.065677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.065947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.065978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.066215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.066244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.066535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.066567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 
00:29:09.522 [2024-10-09 00:36:40.066816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.066846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.067115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.067146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.067529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.067560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.067807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.067837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.068112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.068141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.068492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.068521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.068881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.068913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.069286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.069314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.069556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.069585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.069973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.070004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 
00:29:09.522 [2024-10-09 00:36:40.070362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.070392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.070536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.070566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.070997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.071027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.071314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.071343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.071713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.071761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.072131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.072161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.072526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.072555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.072918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.072950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.073211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.073240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 00:29:09.522 [2024-10-09 00:36:40.073652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.522 [2024-10-09 00:36:40.073681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.522 qpair failed and we were unable to recover it. 
00:29:09.522 [2024-10-09 00:36:40.073919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.522 [2024-10-09 00:36:40.073949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420
00:29:09.522 qpair failed and we were unable to recover it.
[... the same two-line error pair (posix_sock_create: connect() failed, errno = 111 followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420) repeats for every subsequent connection attempt, each ending "qpair failed and we were unable to recover it." ...]
00:29:09.800 [2024-10-09 00:36:40.151155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.800 [2024-10-09 00:36:40.151185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420
00:29:09.800 qpair failed and we were unable to recover it.
00:29:09.800 [2024-10-09 00:36:40.151552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.151581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.151947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.151977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.152388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.152417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.152772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.152804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.153189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.153219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.153608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.153638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.153985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.154015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.154373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.154404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.154689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.154719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.155116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.155147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-10-09 00:36:40.155488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.155518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.155923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.155954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.156322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.156353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.156743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.156773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.157133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.157164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.157443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.157472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.157846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.157877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.158225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.158264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.158624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.158653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.158878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.158911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-10-09 00:36:40.159301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-10-09 00:36:40.159332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-10-09 00:36:40.159712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.159751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.160004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.160035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.160400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.160430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.160693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.160728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.160917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.160947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.161328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.161358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.161741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.161773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.162227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.162256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.162705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.162745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-10-09 00:36:40.163073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.163101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.163470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.163499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.163857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.163889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.164263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.164292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.164741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.164773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.165175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.165210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.165550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.165580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.165953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.165984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.166355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.166384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.166767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.166797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-10-09 00:36:40.167158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.167187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.167524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.167554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.167899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.167930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.168297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.168327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.168694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.168731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.169097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.169126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.169468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.169499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.169849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.169879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.170224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.170255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.170501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.170531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-10-09 00:36:40.170868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.170899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.171275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.171304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.171672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.171702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.172059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.172088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.172502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.172532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.172892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.172923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.173290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.173319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.173683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.173711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.174067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.174097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.174506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.174534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-10-09 00:36:40.174889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.174921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-10-09 00:36:40.175290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-10-09 00:36:40.175320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.175670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.175700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.175951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.175981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.176365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.176394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.176707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.176757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.177142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.177171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.177420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.177452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.177814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.177846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.178181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.178212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-10-09 00:36:40.178421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.178453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.178859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.178889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.179253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.179282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.179661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.179690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.180087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.180117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.180471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.180507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.180891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.180922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.181277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.181307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.181669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.181707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.182078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.182108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-10-09 00:36:40.182549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.182578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.182965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.182997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.183366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.183397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.183739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.183770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.184193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.184223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.184581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.184609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.184979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.185010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.185367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.185396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.185745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.185775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.186141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.186171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-10-09 00:36:40.186546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.186575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.186815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.186846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.187231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.187261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.187604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.187632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.187989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.188019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.188367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.188397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.188758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.188788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.189061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.189090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.189448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.189477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.189737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.189767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-10-09 00:36:40.190148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.190177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-10-09 00:36:40.190545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-10-09 00:36:40.190575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.190956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.190989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.191349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.191379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.191738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.191769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.192126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.192156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.192519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.192548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.192909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.192939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.193274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.193303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.193613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.193645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-10-09 00:36:40.193994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.194024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.194386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.194417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.194768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.194800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.195114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.195142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.195503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.195532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.195871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.195909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.196266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.196295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.196672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.196702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.197003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.197034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.197391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.197421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-10-09 00:36:40.197797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.197827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.198195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.198226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.198473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.198506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.198766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.198796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.199050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.199081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.199445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.199475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.199739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.199770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.200138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.200168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.200537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.200567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.200983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.201014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-10-09 00:36:40.201408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.201438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.201778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.201809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.202156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.202185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.202554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.202583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.202958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.202989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.203317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-10-09 00:36:40.203345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-10-09 00:36:40.203671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.203701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.204066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.204096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.204459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.204488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.204924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.204955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-10-09 00:36:40.205390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.205420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.205779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.205809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.206176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.206205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.206574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.206604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.206955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.206985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.207339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.207368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.207740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.207770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.208132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.208161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.208395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.208425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.208791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.208822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-10-09 00:36:40.209200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.209230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.209626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.209655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.210055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.210085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.210398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.210427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.210669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.210702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.211099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.211136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.211534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.211563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.211924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.211955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.212232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.212261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.212641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.212670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-10-09 00:36:40.213028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.213059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.213466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.213495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.213846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.213877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.214252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.214281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.214641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.214672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.215043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.215073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.215412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.215442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.215793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.215825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.216192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.216220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.216597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.216626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-10-09 00:36:40.216988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.217020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.217382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.217410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.217754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.217783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.218121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.218151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.218494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.218523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.218890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.218920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-10-09 00:36:40.219285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-10-09 00:36:40.219314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.219681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.219709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.220062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.220092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.220461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.220490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-10-09 00:36:40.220872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.220903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.221270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.221300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.221667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.221697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.222062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.222091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.222467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.222496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.222839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.222869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.223233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.223261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.223627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.223657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.224046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.224077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.224349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.224382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-10-09 00:36:40.224767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.224799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.225189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.225220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.225585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.225615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.225948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.225979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.226438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.226531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.227058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.227178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.227628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.227666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.228051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.228084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.228439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.228468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.228999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.229108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-10-09 00:36:40.229569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.229606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.229968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.230001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.230369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.230398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.230759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.230793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.231048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.231077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.231354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.231389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.231749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.231781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.232129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.232159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.232528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.232557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.232827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.232859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-10-09 00:36:40.233224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.233252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.233609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.233639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.233981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.234011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.234383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.234414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.234656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.234686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.235076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.235109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-10-09 00:36:40.235360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-10-09 00:36:40.235390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.235763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.235797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.236216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.236246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.236608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.236637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-10-09 00:36:40.236896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.236930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.237193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.237222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.237555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.237587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.237947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.237978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.238365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.238393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.238751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.238783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.239233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.239263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.239607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.239637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.240021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.240053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.240312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.240342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-10-09 00:36:40.240715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.240775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.241179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.241210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.241563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.241596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.241974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.242006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.242371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.242400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.242669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.242699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.243076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.243114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.243461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.243491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.243850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.243882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.244250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.244281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-10-09 00:36:40.244630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.244661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.245001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.245033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.245396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.245426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.245785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.245816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.246076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.246110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.246463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.246493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.246866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.246898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.247271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.247301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.247653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.247683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.248062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.248094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-10-09 00:36:40.248461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.248492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.248853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.248884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.249244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.249274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.249633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.249664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.250036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.250068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.250426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.250456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.250807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.250838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-10-09 00:36:40.251219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-10-09 00:36:40.251250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.251572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.251602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.251946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.251977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-10-09 00:36:40.252322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.252353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.252713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.252754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.253146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.253177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.253534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.253571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.253975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.254006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.254370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.254400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.254728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.254760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.255130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.255161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.255514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.255544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.255894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.255924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-10-09 00:36:40.256271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.256300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.256659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.256688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.257064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.257095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.257532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.257560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.257900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.257931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.258297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.258326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.258676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.258705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.259064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.259093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.259384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.259414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.259804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.259836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-10-09 00:36:40.260226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.260255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.260532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.260561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.260821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.260852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.261212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.261243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.261582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.261611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.261968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.262001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.262246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.262276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.262668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.262697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.263091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.263121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.263487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.263515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-10-09 00:36:40.263926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.263965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.264328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.264357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.264716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-10-09 00:36:40.264756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-10-09 00:36:40.265120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-10-09 00:36:40.265151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-10-09 00:36:40.265514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-10-09 00:36:40.265543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-10-09 00:36:40.265899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-10-09 00:36:40.265931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-10-09 00:36:40.266276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-10-09 00:36:40.266305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-10-09 00:36:40.266647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-10-09 00:36:40.266675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-10-09 00:36:40.266910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-10-09 00:36:40.266944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-10-09 00:36:40.267313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-10-09 00:36:40.267342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-10-09 00:36:40.267700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-10-09 00:36:40.267738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
[... the same three-record failure sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with timestamps from 00:36:40.268073 through 00:36:40.347160 ...]
00:29:09.813 [2024-10-09 00:36:40.347129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.813 [2024-10-09 00:36:40.347160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:09.813 qpair failed and we were unable to recover it.
00:29:09.813 [2024-10-09 00:36:40.347460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.347488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.347759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.347789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.348172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.348202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.348574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.348602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.348995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.349025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.349390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.349420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.349807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.349836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.350112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.350141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.350397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.350428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.350673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.350702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 
00:29:09.813 [2024-10-09 00:36:40.351074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.351104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.351463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.351493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.351830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.351862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.352192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.352221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.352612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.352641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.352973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.353003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.353362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.353391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.353641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.353673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.354075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.354105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.354521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.354551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 
00:29:09.813 [2024-10-09 00:36:40.354906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.354936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.355217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.355247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.355590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.813 [2024-10-09 00:36:40.355620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.813 qpair failed and we were unable to recover it. 00:29:09.813 [2024-10-09 00:36:40.355776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.355809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.356095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.356124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.356423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.356453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.356826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.356857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.357232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.357262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.357636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.357665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.358034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.358065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 
00:29:09.814 [2024-10-09 00:36:40.358441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.358471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.358712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.358750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.359097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.359127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.359435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.359463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.359713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.359750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.360160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.360189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.360451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.360483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.360853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.360885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.361129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.361161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.361507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.361536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 
00:29:09.814 [2024-10-09 00:36:40.361883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.361914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.362359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.362388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.362733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.362765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.363058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.363087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.363341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.363370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.363741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.363772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.364155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.364184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.364542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.364572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.364942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.364973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.365315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.365346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 
00:29:09.814 [2024-10-09 00:36:40.365620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.365648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.365904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.365934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.366332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.366362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.366709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.366747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.367024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.367054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.367430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.367459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.367878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.367907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.368294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.368323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.368689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.368717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.369108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.369137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 
00:29:09.814 [2024-10-09 00:36:40.369305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.369334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.369675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.369704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.370123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.370153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.814 [2024-10-09 00:36:40.370404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.814 [2024-10-09 00:36:40.370438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.814 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.370796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.370828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.371195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.371230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.371593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.371623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.371974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.372005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.372388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.372417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.372784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.372814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 
00:29:09.815 [2024-10-09 00:36:40.373185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.373214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.373463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.373492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.373871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.373902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.374206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.374236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.374592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.374621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.374988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.375018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.375411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.375440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.375808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.375837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.376201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.376230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.376597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.376627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 
00:29:09.815 [2024-10-09 00:36:40.376966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.376998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.377371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.377400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.377764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.377795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.378108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.378137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.378518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.378547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.378897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.378926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.379230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.379260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.379637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.379667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.380039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.380070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.380417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.380449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 
00:29:09.815 [2024-10-09 00:36:40.380804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.380835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.381195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.381225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.381452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.381492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.381790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.381821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.382185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.382214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.382592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.382621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.383008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.383038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.383317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.383346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.383689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.383726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.384093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.384123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 
00:29:09.815 [2024-10-09 00:36:40.384495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.384524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.384886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.384916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.385286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.385315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.385688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.385717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.815 qpair failed and we were unable to recover it. 00:29:09.815 [2024-10-09 00:36:40.386062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.815 [2024-10-09 00:36:40.386091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.386461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.386490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.386626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.386654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.386998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.387030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.387559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.387597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.387969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.388006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 
00:29:09.816 [2024-10-09 00:36:40.388397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.388426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.388681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.388711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.389093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.389123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.389482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.389513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.389862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.389893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.390343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.390371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.390752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.390783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.390956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.390984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.391398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.391427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.391678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.391706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 
00:29:09.816 [2024-10-09 00:36:40.391901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.391930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.392282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.392310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.392678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.392707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.392968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.392997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.393251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.393281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.393511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.393544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.393771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.393800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.394176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.394205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.394440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.394471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.394836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.394867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 
00:29:09.816 [2024-10-09 00:36:40.395132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.395161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.395513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.395542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.395938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.395972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.396357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.396387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.396751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.396784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.397207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.397235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.397559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.397589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.397946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.397976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.398347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.398377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 00:29:09.816 [2024-10-09 00:36:40.398622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.398651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.816 qpair failed and we were unable to recover it. 
00:29:09.816 [2024-10-09 00:36:40.399024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.816 [2024-10-09 00:36:40.399054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.399334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.399367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.399717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.399755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.400097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.400126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.400498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.400527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.400910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.400940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.401204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.401233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.401609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.401638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.402016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.402046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.402322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.402351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 
00:29:09.817 [2024-10-09 00:36:40.402752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.402782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.403117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.403147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.403535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.403563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.403913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.403946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.404204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.404233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.404600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.404630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.404973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.405003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.405360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.405391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.405743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.405774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.406175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.406204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 
00:29:09.817 [2024-10-09 00:36:40.406578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.406613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.406965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.406997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.407371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.407399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.407768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.407799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.408192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.408220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.408597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.408625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.408979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.409010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.409397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.409426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.409775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.409805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.410146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.410176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 
00:29:09.817 [2024-10-09 00:36:40.410535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.410563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.410915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.410945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.411166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.411195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.411544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.411574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.411953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.411984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.412348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.412377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.412761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.412790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.413181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.413210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.413574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.413603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.413954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.413984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 
00:29:09.817 [2024-10-09 00:36:40.414312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.817 [2024-10-09 00:36:40.414342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.817 qpair failed and we were unable to recover it. 00:29:09.817 [2024-10-09 00:36:40.414714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.414755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.415003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.415036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.415405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.415434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.415679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.415707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.416089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.416119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.416478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.416508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.416838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.416875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.417255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.417285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.417644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.417673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 
00:29:09.818 [2024-10-09 00:36:40.418105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.418135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.418491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.418520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.418896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.418926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.419209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.419237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.419608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.419637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.419981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.420011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.420376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.420405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.420771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.420802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.421178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.818 [2024-10-09 00:36:40.421206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:09.818 qpair failed and we were unable to recover it. 00:29:09.818 [2024-10-09 00:36:40.421579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.421608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 
00:29:10.090 [2024-10-09 00:36:40.421986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.422019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.422379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.422408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.422672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.422700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.423062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.423092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.423463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.423492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.423840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.423871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.424228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.424258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.424633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.424663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.425034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.425065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.425432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.425461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 
00:29:10.090 [2024-10-09 00:36:40.425756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.425786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.426124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.426152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.426499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.426529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.426881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.426910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.427252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.427288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.427626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.427654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.428023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.428053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.428414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.428442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.428824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.428853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.429094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.429122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 
00:29:10.090 [2024-10-09 00:36:40.429481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.429509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.429795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.429825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.430176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.430205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.090 [2024-10-09 00:36:40.430571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.090 [2024-10-09 00:36:40.430601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.090 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.430950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.430980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.431326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.431356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.431606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.431638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.431977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.432008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.432348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.432377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.432743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.432773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-10-09 00:36:40.433123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.433152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.433502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.433531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.433781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.433814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.434213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.434243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.434598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.434628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.434960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.434990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.435381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.435410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.435779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.435816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.436066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.436099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.436461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.436490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-10-09 00:36:40.436851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.436881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.437277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.437305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.437670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.437699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.438090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.438122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.438501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.438531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.438895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.438926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.439285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.439314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.439670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.439699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.440054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.440084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.440437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.440466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-10-09 00:36:40.440838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.440867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.441187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.441217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.441593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.441622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.441994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.442026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.442389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.442418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.442792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.442823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.443202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.443230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.443605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.443634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.443923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.443952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.444317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.444346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 
00:29:10.091 [2024-10-09 00:36:40.444729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.444759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.445128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.445157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.445506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.445534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.445881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.445913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.091 qpair failed and we were unable to recover it. 00:29:10.091 [2024-10-09 00:36:40.446257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-10-09 00:36:40.446286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.446664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.446696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.447070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.447101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.447453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.447482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.447853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.447884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.448273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.448302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 
00:29:10.092 [2024-10-09 00:36:40.448675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.448703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.449052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.449082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.449454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.449483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.449706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.449748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.450113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.450142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.450557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.450586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.450965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.450997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.451342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.451371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.451731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.451762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.452115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.452144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 
00:29:10.092 [2024-10-09 00:36:40.452511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.452540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.452917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.452949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.453312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.453353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.453684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.453713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.454125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.454154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.454435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.454464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.454840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.454871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.455243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.455272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.455645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.455673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.456043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.456074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 
00:29:10.092 [2024-10-09 00:36:40.456252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.456285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.456637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.456667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.456958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.456989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.457365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.457394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.457761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.457793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.458146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.458176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.458521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.458552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.458912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.458942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.459296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.459324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.459570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.459599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 
00:29:10.092 [2024-10-09 00:36:40.459880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.459910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.460257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.460285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.460547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.460576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.460950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.460981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.461252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-10-09 00:36:40.461280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.092 qpair failed and we were unable to recover it. 00:29:10.092 [2024-10-09 00:36:40.461640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.461669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.462086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.462118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.462469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.462499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.462734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.462768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.463146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.463181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 
00:29:10.093 [2024-10-09 00:36:40.463534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.463565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.463911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.463941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.464304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.464333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.464714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.464751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.465099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.465137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.465481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.465509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.465867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.465899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.466247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.466276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.466643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.466673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.467026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.467056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 
00:29:10.093 [2024-10-09 00:36:40.467427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.467458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.467815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.467845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.468201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.468232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.468616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.468646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.469048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.469079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.469433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.469461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.469806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.469836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.470222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.470250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.470606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.470635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.470967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.470998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 
00:29:10.093 [2024-10-09 00:36:40.471360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.471389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.471752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.471783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.472140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.472169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.472543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.472571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.472930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.472962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.473363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.473392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.473646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.473674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.474072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.474103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.474464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.474494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.474874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.474903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 
00:29:10.093 [2024-10-09 00:36:40.475276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.475305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.475609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.475637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.475892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.475922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.476299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.476329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.476664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.476694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.093 [2024-10-09 00:36:40.477053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.093 [2024-10-09 00:36:40.477083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.093 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.477451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.477481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.477841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.477872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.478263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.478292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.478711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.478749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 
00:29:10.094 [2024-10-09 00:36:40.479105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.479137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.479517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.479546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.479905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.479937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.480293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.480321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.480687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.480716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.481106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.481135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.481562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.481590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.481967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.481997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.482369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.482398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.482773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.482803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 
00:29:10.094 [2024-10-09 00:36:40.483162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.483191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.483564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.483593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.484041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.484071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.484426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.484456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.484810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.484840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.485202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.485231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.485582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.485611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.485948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.485979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.486204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.486233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.486479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.486507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 
00:29:10.094 [2024-10-09 00:36:40.486881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.486910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.487273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.487303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.487660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.487689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.487919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.487949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.488323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.488352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.488695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.488735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.489118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.489146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.489513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.489547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.490001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.490032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 00:29:10.094 [2024-10-09 00:36:40.490417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.094 [2024-10-09 00:36:40.490446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.094 qpair failed and we were unable to recover it. 
00:29:10.094 [2024-10-09 00:36:40.490780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.490810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.491182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.491213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.491567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.491597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.491998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.492028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.492396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.492425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.492785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.492815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.493171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.493201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.493553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.493583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.493959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.493990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.494328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.494357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-10-09 00:36:40.494727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.494757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.495124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.495154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.495413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.495442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.495685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.495717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.496114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.496155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.496503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.496533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.496907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.496937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.497298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.497327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.497667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.497696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.498085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.498115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-10-09 00:36:40.498489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.498518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.498853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.498884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.499237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.499267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.499652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.499682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.500018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.500054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.500399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.500429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.500778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.500808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.501180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.501208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.501454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.501487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.501894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.501925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-10-09 00:36:40.502296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.502326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.502700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.502747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.503166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.503195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.503604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.503634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.503999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.504029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.504362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.504391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.504772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.504803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.505161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.505190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.505541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.505570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-10-09 00:36:40.505927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.505964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-10-09 00:36:40.506228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-10-09 00:36:40.506257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.506544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.506573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.506853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.506883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.507249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.507277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.507624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.507654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.508061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.508092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.508438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.508467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.508834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.508865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.509241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.509269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.509572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.509602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-10-09 00:36:40.510003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.510033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.510458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.510493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.510881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.510912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.511278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.511306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.511670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.511698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.512094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.512132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.512390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.512418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.512770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.512802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.513180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.513209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.513568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.513596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-10-09 00:36:40.513978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.514008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.514351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.514381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.514740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.514770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.515097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.515127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.515413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.515442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.515802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.515834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.516193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.516222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.516682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.516712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.517097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.517128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.517370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.517404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-10-09 00:36:40.517756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.517787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.518149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.518179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.518557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.518586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.518972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.519002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.519364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.519393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.519754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.519786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.520029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.520058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.520408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.520437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.520808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.520839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.521206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.521236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 
00:29:10.096 [2024-10-09 00:36:40.521600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.096 [2024-10-09 00:36:40.521630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.096 qpair failed and we were unable to recover it. 00:29:10.096 [2024-10-09 00:36:40.522015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.522047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.522403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.522432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.522677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.522710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.523095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.523127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.523523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.523553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.523800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.523830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.524180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.524215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.524555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.524584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.524951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.524981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.097 [2024-10-09 00:36:40.525342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.525370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.525748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.525779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.526147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.526177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.526538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.526567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.526956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.526986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.527322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.527350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.527608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.527636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.527983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.528013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.528387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.528415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 00:29:10.097 [2024-10-09 00:36:40.528787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.097 [2024-10-09 00:36:40.528817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.097 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-10-09 00:36:40.603817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.603846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.604191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.604222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.604566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.604596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.604985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.605016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.605395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.605426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.605773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.605805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.606057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.606086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.606441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.606472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.606806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.606838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.607216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.607247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-10-09 00:36:40.607510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.607538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.607900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.607930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.608302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.608331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.608697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.608733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.609077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.609106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.609462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.609492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.609882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.609912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.610331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.610362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.610759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.610791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-10-09 00:36:40.611081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-10-09 00:36:40.611112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-10-09 00:36:40.611350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.611379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.611779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.611809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.612172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.612200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.612574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.612603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.612985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.613015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.613397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.613425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.613807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.613837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.614188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.614218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.614599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.614628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.614972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.615003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-10-09 00:36:40.615353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.615382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.615756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.615787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.616145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.616174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.616432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.616465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.616798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.616827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.617176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.617206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.617481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.617517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.617788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.617817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.618195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.618224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.618556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.618586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-10-09 00:36:40.618949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.618980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.619385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.619415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.619773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.619802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.620189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.620219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.620655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.620684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.621091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.621122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.621367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.621396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.621764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.621795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.622086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.622115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.622477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.622506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-10-09 00:36:40.622874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.622906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.623285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.623314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.623713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.623751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.624109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.624139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.624462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.624491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.624841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.624872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-10-09 00:36:40.625246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-10-09 00:36:40.625277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.625624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.625655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.626041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.626071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.626419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.626449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-10-09 00:36:40.626803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.626834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.627221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.627251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.627602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.627632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.627973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.628009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.628249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.628280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.628638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.628668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.629069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.629100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.629466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.629496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.629852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.629882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.630282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.630311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-10-09 00:36:40.630669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.630708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.631112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.631142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.631506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.631535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.631892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.631923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.632268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.632297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.632664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.632692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.633051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.633080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.633458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.633488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.633843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.633873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.634307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.634336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-10-09 00:36:40.634667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.634698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.635014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.635043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.635286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.635319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.635682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.635712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.636097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.636126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.636455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.636484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.636744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.636778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.637205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.637234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.637581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.637610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.637950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.637981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-10-09 00:36:40.638374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.638404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.638778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.638808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.639180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.639210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.639577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.639606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.639946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.639977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.640337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.640366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.640626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-10-09 00:36:40.640654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-10-09 00:36:40.641028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.641058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.641418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.641447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.641803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.641834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-10-09 00:36:40.642187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.642216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.642575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.642604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.642984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.643015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.643378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.643407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.643779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.643809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.644180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.644209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.644559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.644587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.644986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.645016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.645373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.645402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.645750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.645780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-10-09 00:36:40.646144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.646173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.646464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.646493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.646917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.646947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.647301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.647330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.647684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.647713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.647996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.648027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.648301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.648330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.648701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.648741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.649078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.649108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.649520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.649549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-10-09 00:36:40.649998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.650029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.650395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.650424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.650767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.650798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.651161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.651190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.651553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.651581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.651943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.651973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.652410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.652439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.652818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.652848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.653217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.653245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.653573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.653603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-10-09 00:36:40.653972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.654002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.654366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.654400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.654761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.654791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.655196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.655225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.655578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.655607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.655974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.656004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.656361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.656392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-10-09 00:36:40.656781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-10-09 00:36:40.656810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.657083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.657111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.657478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.657507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 
00:29:10.106 [2024-10-09 00:36:40.657903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.657934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.658310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.658339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.658692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.658735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.659107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.659136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.659567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.659596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.659973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.660003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.660331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.660360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.660714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.660752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.661117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.661146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.661476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.661506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 
00:29:10.106 [2024-10-09 00:36:40.661839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.661870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.662241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.662270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.662628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.662658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.662934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.662964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.663337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.663366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.663742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.663772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.664140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.664169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.664538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.664568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.664945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.664981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.665323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.665353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 
00:29:10.106 [2024-10-09 00:36:40.665729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.665760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.666002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.666031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.666405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.666434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.666766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.666798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.667180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.667209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.667532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.667560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.667949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.667979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.668321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.668349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.668638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.668666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.668936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.668967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 
00:29:10.106 [2024-10-09 00:36:40.669328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.669357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.669741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.669773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.670164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.670194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.670567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.670595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.670971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.671003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.671264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.671293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.671641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.671671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.672106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.672136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-10-09 00:36:40.672493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-10-09 00:36:40.672523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.672933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.672962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 
00:29:10.107 [2024-10-09 00:36:40.673315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.673343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.673646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.673676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.673979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.674009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.674375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.674405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.674769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.674800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.675180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.675214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.675567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.675596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.675983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.676014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.676380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.676408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.676765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.676795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 
00:29:10.107 [2024-10-09 00:36:40.677078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.677107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.677441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.677470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.677834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.677867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.678129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.678158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.678536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.678565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.678900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.678931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.679303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.679332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.679588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.679621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.679932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.679962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.680308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.680338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 
00:29:10.107 [2024-10-09 00:36:40.680574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.680603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.680943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.680973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.681348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.681378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.681750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.681795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.682182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.682213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.682533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.682562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.682999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.683030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.683377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.683408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.683656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.683688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.683994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.684025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 
00:29:10.107 [2024-10-09 00:36:40.684405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.684435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.684798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.684828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.685088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.685117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.685529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.685559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-10-09 00:36:40.685904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.107 [2024-10-09 00:36:40.685935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.686293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.686322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.686684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.686713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.687060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.687088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.687444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.687473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.687931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.687962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 
00:29:10.108 [2024-10-09 00:36:40.688321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.688350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.688718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.688764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.689091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.689120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.689482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.689510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.689876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.689906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.690178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.690207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.690584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.690614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.690956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.690987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.691327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.691355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.691690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.691731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 
00:29:10.108 [2024-10-09 00:36:40.692158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.692187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.692432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.692461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.692832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.692863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.693207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.693237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.693621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.693650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.694016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.694046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.694313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.694341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.694716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.694753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.695128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.695158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.695519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.695548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 
00:29:10.108 [2024-10-09 00:36:40.695801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.695831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.696164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.696193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.696546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.696576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.696953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.696983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.697326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.697356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.697681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.697710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.698129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.698158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.698418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.698446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.698740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.698770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.699120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.699149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 
00:29:10.108 [2024-10-09 00:36:40.699523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.699552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.699902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.699933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.700296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.700324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.700675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-10-09 00:36:40.700711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-10-09 00:36:40.701058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.701088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.701464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.701493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.701857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.701888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.702226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.702257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.702619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.702648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.702983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.703013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-10-09 00:36:40.703418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.703447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.703849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.703879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.704249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.704279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.704656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.704685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.705041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.705071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.705317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.705350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.705814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.705845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.706207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.706237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.706601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.706630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.706993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.707022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-10-09 00:36:40.707397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.707426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.707789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.707819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.708200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.708228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.708607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.708636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.709009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.709041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.709391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.709420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.709784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.709815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.710161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.710190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.710552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.710581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.710849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.710879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-10-09 00:36:40.711263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.711298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.711668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.711697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.712047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.712078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.712435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.712464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.712766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.712796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.713169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.713198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-10-09 00:36:40.713549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-10-09 00:36:40.713578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.713841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.713876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.714239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.714269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.714531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.714560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 
00:29:10.382 [2024-10-09 00:36:40.714830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.714860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.715214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.715245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.715588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.715617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.716061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.716092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.716476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.716505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.716872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.716902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.717154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.717182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.717424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.717455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.717824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.717855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.382 [2024-10-09 00:36:40.718114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.718143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 
00:29:10.382 [2024-10-09 00:36:40.718498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.382 [2024-10-09 00:36:40.718527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.382 qpair failed and we were unable to recover it. 00:29:10.383 [2024-10-09 00:36:40.718907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-10-09 00:36:40.718937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-10-09 00:36:40.719298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-10-09 00:36:40.719327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-10-09 00:36:40.719702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-10-09 00:36:40.719749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-10-09 00:36:40.720006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-10-09 00:36:40.720040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-10-09 00:36:40.720404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-10-09 00:36:40.720433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-10-09 00:36:40.720794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-10-09 00:36:40.720826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-10-09 00:36:40.721198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-10-09 00:36:40.721227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-10-09 00:36:40.721671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-10-09 00:36:40.721702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 00:29:10.383 [2024-10-09 00:36:40.721972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.383 [2024-10-09 00:36:40.722002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.383 qpair failed and we were unable to recover it. 
00:29:10.388 [2024-10-09 00:36:40.794825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.794855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.795233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.795262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.795617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.795646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.796093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.796124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.796493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.796524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.796892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.796923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.797196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.797225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.797567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.797596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.797943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.797975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.798397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.798426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 
00:29:10.388 [2024-10-09 00:36:40.798634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.798662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.799035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.799066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.799449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.799477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.799836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.799867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.800259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.800289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.800658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.800686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.801048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.801078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.801453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.801483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.801833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.801865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.802263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.802292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 
00:29:10.388 [2024-10-09 00:36:40.802545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.802577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.802919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.802953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.803241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.803269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.803676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.803706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.804117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.804147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.804398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.804427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.804778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.804811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.805185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.805216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.805586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.805625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.805968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.806000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 
00:29:10.388 [2024-10-09 00:36:40.806354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.806385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.806756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.806787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.807152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.807182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.807554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.807583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.807956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.807991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.388 [2024-10-09 00:36:40.808139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.388 [2024-10-09 00:36:40.808173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.388 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.808655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.808684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.809077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.809109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.809459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.809489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.809746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.809778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 
00:29:10.389 [2024-10-09 00:36:40.810103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.810133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.810485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.810515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.810877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.810911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.811266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.811297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.811661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.811690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.811960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.811990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.812351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.812379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.812749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.812781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.813039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.813068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.813351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.813381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 
00:29:10.389 [2024-10-09 00:36:40.813677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.813713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.814078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.814108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.814372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.814401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.814675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.814706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.815088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.815119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.815487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.815517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.815877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.815908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.816286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.816316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.816672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.816702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.817105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.817137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 
00:29:10.389 [2024-10-09 00:36:40.817396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.817429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.817784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.817817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.818188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.818218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.818588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.818620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.818957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.818989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.819354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.819391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.819743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.819776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.820007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.820040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.820301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.820331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.820671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.820700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 
00:29:10.389 [2024-10-09 00:36:40.821079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.821111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.821465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.821496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.821757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.821787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.822157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.822188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.822545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.822576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.822943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.389 [2024-10-09 00:36:40.822974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.389 qpair failed and we were unable to recover it. 00:29:10.389 [2024-10-09 00:36:40.823340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.823371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.823749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.823780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.824136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.824166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.824552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.824583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 
00:29:10.390 [2024-10-09 00:36:40.824846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.824879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.825223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.825253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.825612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.825643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.826032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.826064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.826423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.826453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.826811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.826842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.827212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.827243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.827612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.827641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.828003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.828033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.828391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.828420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 
00:29:10.390 [2024-10-09 00:36:40.828768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.828801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.829191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.829220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.829491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.829526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.829894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.829929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.830269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.830300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.830687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.830716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.830966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.830996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.831396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.831428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.831788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.831821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.832174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.832204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 
00:29:10.390 [2024-10-09 00:36:40.832564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.832594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.832945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.832976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.833308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.833339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.833758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.833790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.834142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.834172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.834499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.834529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.834865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.834897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.835253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.835283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.835557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.835589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.835967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.836000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 
00:29:10.390 [2024-10-09 00:36:40.836340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.836373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.836742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.836774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.837171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.837200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.837579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.837609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.838009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.838040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.838479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.838509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.390 [2024-10-09 00:36:40.838854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.390 [2024-10-09 00:36:40.838892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.390 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.839227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.839257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.839614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.839645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.840011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.840055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 
00:29:10.391 [2024-10-09 00:36:40.840433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.840463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.840679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.840708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.841093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.841126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.841494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.841525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.841895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.841927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.842182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.842215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.842571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.842601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.842953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.842986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.843347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.843377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.843618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.843648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 
00:29:10.391 [2024-10-09 00:36:40.844026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.844058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.844434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.844468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.844729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.844759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.845129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.845159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.845634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.845665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.845946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.845977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.846350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.846381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.846730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.846762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.847119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.847150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.847499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.847530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 
00:29:10.391 [2024-10-09 00:36:40.847897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.847931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.848322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.848357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.848717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.848758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.849119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.849150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.849403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.849437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.849877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.849911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.850266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.850297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.850552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.850584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.850919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.850952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.851309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.851339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 
00:29:10.391 [2024-10-09 00:36:40.851602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.851633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.391 [2024-10-09 00:36:40.852083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.391 [2024-10-09 00:36:40.852115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.391 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.852371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.852403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.852735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.852768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.853155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.853188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.853461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.853492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.853870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.853903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.854276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.854310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.854657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.854688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.855126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.855158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 
00:29:10.392 [2024-10-09 00:36:40.855506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.855542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.855823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.855855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.856215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.856247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.856613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.856643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.857012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.857044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.857385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.857417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.857782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.857815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.858183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.858214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.858631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.858662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.859014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.859047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 
00:29:10.392 [2024-10-09 00:36:40.859395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.859426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.859772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.859804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.860086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.860116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.860508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.860539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.860798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.860830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.861216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.861245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.861611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.861643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.861934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.861966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.862310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.862341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.862695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.862735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 
00:29:10.392 [2024-10-09 00:36:40.863083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.863112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.863507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.863536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.863879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.863911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.864277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.864306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.864570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.864603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.864990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.865021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.865378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.865409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.865776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.865813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.866072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.866103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.866481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.866510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 
00:29:10.392 [2024-10-09 00:36:40.866855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.866885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.867134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.392 [2024-10-09 00:36:40.867167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.392 qpair failed and we were unable to recover it. 00:29:10.392 [2024-10-09 00:36:40.867517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.867548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.867836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.867867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.868258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.868288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.868646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.868674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.869125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.869158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.869527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.869564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.869823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.869854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.870129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.870159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 
00:29:10.393 [2024-10-09 00:36:40.870540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.870570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.870830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.870860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.871102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.871132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.871480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.871509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.871862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.871893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.872287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.872317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.872659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.872688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.873086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.873116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.873493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.873523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.873945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.873975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 
00:29:10.393 [2024-10-09 00:36:40.874358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.874386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.874871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.874902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.875272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.875301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.875641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.875671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.876091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.876128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.876384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.876414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.876667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.876696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.876952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.876982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.877233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.877263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.877490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.877520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 
00:29:10.393 [2024-10-09 00:36:40.877925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.877956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.878213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.878241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.878576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.878605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.879019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.879049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.879412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.879442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.879795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.879827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.880191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.880220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.880606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.880635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.881036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.881067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.881412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.881442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 
00:29:10.393 [2024-10-09 00:36:40.881814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.881846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.882210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.882240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.882594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.393 [2024-10-09 00:36:40.882623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.393 qpair failed and we were unable to recover it. 00:29:10.393 [2024-10-09 00:36:40.882857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.882887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.883278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.883307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.883533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.883564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.883949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.883979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.884308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.884337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.884761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.884793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.885179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.885209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 
00:29:10.394 [2024-10-09 00:36:40.885573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.885602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.885855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.885886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.886247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.886277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.886393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.886424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.886804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.886834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.887187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.887219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.887603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.887634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.888022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.888051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.888484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.888514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.888764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.888798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 
00:29:10.394 [2024-10-09 00:36:40.889170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.889203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.889569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.889599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.890016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.890047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.890408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.890440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.890788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.890818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.891081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.891113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.891478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.891508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.891899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.891930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.892287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.892316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.892686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.892715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 
00:29:10.394 [2024-10-09 00:36:40.893091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.893120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.893492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.893521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.893890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.893923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.894279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.894308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.894565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.894593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.894931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.894961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.895203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.895236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.895605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.895635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.896029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.896059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.896358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.896387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 
00:29:10.394 [2024-10-09 00:36:40.896767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.896798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.897250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.897279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.897648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.897679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.394 [2024-10-09 00:36:40.898029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.394 [2024-10-09 00:36:40.898060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.394 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.898411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.898442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.898784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.898816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.899212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.899243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.899594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.899622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.899973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.900005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.900358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.900390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 
00:29:10.395 [2024-10-09 00:36:40.900739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.900770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.901170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.901201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.901568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.901602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.901954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.901985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.902337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.902369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.902710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.902752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.902977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.903008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.903358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.903389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.903631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.903660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.904133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.904163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 
00:29:10.395 [2024-10-09 00:36:40.904505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.904535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.904896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.904927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.905287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.905317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.905658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.905690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.906134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.906165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.906610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.906641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.906859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.906890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.907232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.907262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.907620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.907649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.908009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.908040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 
00:29:10.395 [2024-10-09 00:36:40.908394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.908425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.908802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.908834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.909198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.909227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.909576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.909606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.909837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.909866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.910133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.910164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.910510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.910540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.910883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.910913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.911290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.911321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.395 qpair failed and we were unable to recover it. 00:29:10.395 [2024-10-09 00:36:40.911691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.395 [2024-10-09 00:36:40.911738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 
00:29:10.396 [2024-10-09 00:36:40.912109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.912138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.912494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.912524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.912917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.912949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.913288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.913316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.913669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.913699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.914102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.914134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.914487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.914515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.914897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.914928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.915297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.915327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.915691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.915731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 
00:29:10.396 [2024-10-09 00:36:40.916096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.916128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.916383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.916417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.916713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.916768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.917150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.917181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.917540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.917570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.917953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.917984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.918339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.918369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.918759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.918792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.919182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.919211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.919577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.919608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 
00:29:10.396 [2024-10-09 00:36:40.920020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.920050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.920495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.920526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.920879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.920910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.921265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.921295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.921652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.921682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.921849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.921883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.922221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.922259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.922604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.922634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.922983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.923013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.923387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.923419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 
00:29:10.396 [2024-10-09 00:36:40.923779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.923810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.924070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.924099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.924468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.924498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.924768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.924799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.925162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.925193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.925558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.925588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.925943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.925973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.926345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.926377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.926744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.926775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 00:29:10.396 [2024-10-09 00:36:40.926893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.396 [2024-10-09 00:36:40.926920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.396 qpair failed and we were unable to recover it. 
00:29:10.396 [2024-10-09 00:36:40.927251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.927282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.927642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.927675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.928096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.928137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.928389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.928418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.928660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.928690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.929096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.929127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.929487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.929518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.929877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.929910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.930208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.930237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.930603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.930633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 
00:29:10.397 [2024-10-09 00:36:40.930976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.931006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.931351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.931380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.931740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.931773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.932148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.932179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.932424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.932454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.932836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.932867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.933231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.933260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.933518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.933547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.933813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.933849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.934214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.934245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 
00:29:10.397 [2024-10-09 00:36:40.934610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.934641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.935039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.935070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.935320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.935350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.935719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.935759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.936137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.936167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.936429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.936459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.936849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.936880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.937220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.937256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.937489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.937520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.937790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.937822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 
00:29:10.397 [2024-10-09 00:36:40.938060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.938094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.938447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.938479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.938830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.938860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.939236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.939266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.939646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.939677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.939933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.939964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.940287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.940317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.940670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.940700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.941149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.941180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 00:29:10.397 [2024-10-09 00:36:40.941523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.397 [2024-10-09 00:36:40.941554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.397 qpair failed and we were unable to recover it. 
00:29:10.397 [2024-10-09 00:36:40.941905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.941935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.942299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.942330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.942701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.942737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.943114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.943143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.943501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.943531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.943882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.943912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.944291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.944320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.944753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.944783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.945078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.945108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.945459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.945489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 
00:29:10.398 [2024-10-09 00:36:40.945844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.945874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.946217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.946245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.946632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.946661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.947081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.947111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.947477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.947513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.947988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.948018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.948375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.948405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.948653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.948686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.949072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.949103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.949477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.949507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 
00:29:10.398 [2024-10-09 00:36:40.949887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.949918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.950265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.950293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.950666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.950695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.951087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.951117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.951368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.951396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.951759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.951789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.952147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.952177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.952523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.952551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.952917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.952948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.953315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.953344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 
00:29:10.398 [2024-10-09 00:36:40.953708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.953753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.954028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.954056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.954401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.954430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.954840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.954871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.955244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.955273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.955655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.955684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.956072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.956103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.956465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.956493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.956856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.956887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 00:29:10.398 [2024-10-09 00:36:40.957272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.957300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.398 qpair failed and we were unable to recover it. 
00:29:10.398 [2024-10-09 00:36:40.957735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.398 [2024-10-09 00:36:40.957766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.958159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.958194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.958517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.958547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.958834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.958864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.959268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.959297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.959657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.959686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.960091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.960121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.960459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.960489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.960849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.960880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.961140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.961173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 
00:29:10.399 [2024-10-09 00:36:40.961511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.961542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.961935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.961964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.962338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.962367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.962730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.962761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.963126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.963156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.963523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.963552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.963913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.963943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.964326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.964355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.964715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.964759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.965142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.965172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 
00:29:10.399 [2024-10-09 00:36:40.965518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.965549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.965905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.965935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.966282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.966312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.966685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.966714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.967102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.967132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.967379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.967410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.967771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.967803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.968189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.968218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.968583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.968611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.968978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.969008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 
00:29:10.399 [2024-10-09 00:36:40.969369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.969398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.969788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.969819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.970175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.970204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.970574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.970605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.970975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.971005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.971376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.971405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.971763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.971792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.972161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.972190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.972596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.972626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 00:29:10.399 [2024-10-09 00:36:40.972989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.399 [2024-10-09 00:36:40.973027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.399 qpair failed and we were unable to recover it. 
00:29:10.399 [2024-10-09 00:36:40.973397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.973425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.973796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.973828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.974239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.974269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.974638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.974669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.975020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.975051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.975400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.975431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.975655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.975688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.976032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.976063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.976413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.976443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.976754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.976786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 
00:29:10.400 [2024-10-09 00:36:40.977136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.977165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.977531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.977561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.977798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.977831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.978214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.978243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.978609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.978638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.978980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.979011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.979386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.979415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.979712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.979749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.980016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.980048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.980443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.980472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 
00:29:10.400 [2024-10-09 00:36:40.980836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.980867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.981232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.981260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.981624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.981654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.982001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.982032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.982399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.982429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.982784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.982814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.983198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.983227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.983600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.983630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.983899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.983929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.984268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.984306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 
00:29:10.400 [2024-10-09 00:36:40.984653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.984682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.984989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.985019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.985419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.985449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.985814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.400 [2024-10-09 00:36:40.985846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.400 qpair failed and we were unable to recover it. 00:29:10.400 [2024-10-09 00:36:40.986203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.986234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.986577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.986606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.986983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.987012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.987381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.987410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.987770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.987801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.988133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.988162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 
00:29:10.401 [2024-10-09 00:36:40.988537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.988567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.988905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.988934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.989288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.989316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.989694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.989741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.990124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.990152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.990505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.990534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.990872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.990903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.991288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.991318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.991689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.991757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.992087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.992116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 
00:29:10.401 [2024-10-09 00:36:40.992491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.992521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.992889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.992921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.993174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.993202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.993553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.993581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.993853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.993884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.994226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.994255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.994616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.994658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.994998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.995029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.995388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.995418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.995749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.995782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 
00:29:10.401 [2024-10-09 00:36:40.996157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.996187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.996541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.996571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.996961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.996993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.401 qpair failed and we were unable to recover it. 00:29:10.401 [2024-10-09 00:36:40.997353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.401 [2024-10-09 00:36:40.997382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:40.997751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:40.997782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:40.998119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:40.998148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:40.998512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:40.998542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:40.998904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:40.998933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:40.999301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:40.999332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:40.999699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:40.999738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 
00:29:10.402 [2024-10-09 00:36:41.001662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.001756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:41.002197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.002232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:41.002663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.002693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:41.003061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.003092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:41.003454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.003485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:41.003856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.003888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:41.004215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.004245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:41.004598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.004627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:41.005048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.005078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.402 [2024-10-09 00:36:41.005406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.005438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 
00:29:10.402 [2024-10-09 00:36:41.005790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.402 [2024-10-09 00:36:41.005820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.402 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.006192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.006223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.006585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.006616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.007000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.007040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.007418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.007447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.007809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.007839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.008063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.008092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.008462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.008493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.008858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.008889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.009237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.009267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 
00:29:10.682 [2024-10-09 00:36:41.009611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.009640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.010014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.010045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.010427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.010457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.010800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.010830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.011193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.011222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.011589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.011617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.011981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.012013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.012384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.012414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.012790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.012819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.013184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.013213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 
00:29:10.682 [2024-10-09 00:36:41.013522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.013550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.013934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.013966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.014315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.014346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.014697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.014734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.015139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.015168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.015530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.015560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.015984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.016016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.682 qpair failed and we were unable to recover it. 00:29:10.682 [2024-10-09 00:36:41.016269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.682 [2024-10-09 00:36:41.016299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.016652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.016682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.017035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.017065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-10-09 00:36:41.017441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.017470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.017839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.017870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.018238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.018267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.018638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.018668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.018936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.018966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.019285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.019315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.019673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.019702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.020067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.020098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.020473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.020503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.020856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.020887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-10-09 00:36:41.021287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.021320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.021676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.021706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.022095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.022125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.022393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.022426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.022857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.022894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.023267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.023297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.023551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.023581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.023956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.023987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.024354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.024385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.024758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.024789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-10-09 00:36:41.025155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.025184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.025601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.025631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.025957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.025987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.026353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.026382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.026742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.026773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.027137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.027167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.027429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.027462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.027801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.027832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.028247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.028277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.028647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.028676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 
00:29:10.683 [2024-10-09 00:36:41.029101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.029132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.029467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.029496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.029779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.029810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.030186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.030216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.030469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.030498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.030661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.030693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.031093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.031124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.683 [2024-10-09 00:36:41.031490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.683 [2024-10-09 00:36:41.031520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.683 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.031922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.031952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.032299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.032330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 
00:29:10.684 [2024-10-09 00:36:41.032689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.032726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.033124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.033159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.033537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.033567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.033967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.034000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.034358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.034388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.034709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.034752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.035096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.035125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.035480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.035510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.035781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.035811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.036178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.036208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 
00:29:10.684 [2024-10-09 00:36:41.036576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.036607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.036885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.036916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.037293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.037322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.037668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.037697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.038072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.038101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.038472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.038502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.038884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.038915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.039286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.039314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.039694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.039735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.040130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.040160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 
00:29:10.684 [2024-10-09 00:36:41.040470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.040499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.040760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.040791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.041177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.041206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.041530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.041560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.041858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.041888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.042150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.042183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.042539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.042569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.042909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.042945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.043305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.043341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.043718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.043758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 
00:29:10.684 [2024-10-09 00:36:41.044071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.044100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.044429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.044460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.044810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.044841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.045144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.045176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.045556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.045586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.046022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.046051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.046309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.046339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.046773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.684 [2024-10-09 00:36:41.046804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.684 qpair failed and we were unable to recover it. 00:29:10.684 [2024-10-09 00:36:41.047033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.047065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.047329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.047358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 
00:29:10.685 [2024-10-09 00:36:41.047706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.047744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.048101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.048131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.048494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.048525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.048785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.048815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.049186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.049214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.049486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.049515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.049779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.049809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.050164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.050195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.050562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.050591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.051008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.051039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 
00:29:10.685 [2024-10-09 00:36:41.051297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.051325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.051689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.051718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.052088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.052121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.052494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.052523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.052902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.052932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.053296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.053325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.053687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.053716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.054140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.054169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.054535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.054568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.054948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.054979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 
00:29:10.685 [2024-10-09 00:36:41.055267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.055297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.055649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.055680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.056069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.056105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.056477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.056508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.056769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.056799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.057069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.057098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.057356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.057385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.057761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.057792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.058134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.058163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.058516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.058546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 
00:29:10.685 [2024-10-09 00:36:41.058829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.058860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.059160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.059192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.059542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.059572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.059927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.059958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.060351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.060382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.060769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.060799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.061125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.061153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.061416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.061446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.061810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.685 [2024-10-09 00:36:41.061840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.685 qpair failed and we were unable to recover it. 00:29:10.685 [2024-10-09 00:36:41.062219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.062255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 
00:29:10.686 [2024-10-09 00:36:41.062598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.062627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.062991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.063021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.063393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.063422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.063669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.063699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.064041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.064071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.064437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.064466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.064804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.064834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.065183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.065212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.065574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.065603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.065872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.065902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 
00:29:10.686 [2024-10-09 00:36:41.066283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.066316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.066663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.066694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.066981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.067012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.067236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.067269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.067636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.067666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.068016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.068049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.068308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.068343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.068744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.068776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.069143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.069172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.069501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.069532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 
00:29:10.686 [2024-10-09 00:36:41.069887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.069918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.070339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.070369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.070733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.070764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.071138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.071168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.071460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.071492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.071756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.071786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.072243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.072273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.072653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.072687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.073050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.073089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.073419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.073450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 
00:29:10.686 [2024-10-09 00:36:41.073912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.073944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.074278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.074308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.074617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.074646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.074896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.686 [2024-10-09 00:36:41.074927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.686 qpair failed and we were unable to recover it. 00:29:10.686 [2024-10-09 00:36:41.075312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.075343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.075682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.075712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.076081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.076110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.076368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.076397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.076638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.076671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.077066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.077096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 
00:29:10.687 [2024-10-09 00:36:41.077466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.077498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.077873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.077904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.078307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.078337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.078672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.078714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.079093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.079122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.079495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.079524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.079890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.079921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.080321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.080351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.080578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.080610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.081002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.081033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 
00:29:10.687 [2024-10-09 00:36:41.081381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.081411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.081772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.081804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.082179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.082208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.082559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.082589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.083020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.083053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.083398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.083430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.083751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.083780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.084136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.084166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.084521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.084550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.084830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.084860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 
00:29:10.687 [2024-10-09 00:36:41.085225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.085253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.085619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.085649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.086022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.086054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.086408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.086438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.086795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.086825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.087200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.087230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.087613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.087642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.087893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.087923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.088310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.088339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.088583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.088611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 
00:29:10.687 [2024-10-09 00:36:41.089031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.089060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.089423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.089452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.687 [2024-10-09 00:36:41.089842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.687 [2024-10-09 00:36:41.089872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.687 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.090238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.090267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.090636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.090665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.091042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.091072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.091331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.091360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.091705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.091742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.092110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.092139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.092526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.092556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-10-09 00:36:41.092826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.092859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.093242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.093272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.093613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.093642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.094043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.094072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.094393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.094425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.094782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.094812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.095164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.095193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.095598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.095626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.095990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.096020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.096377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.096406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-10-09 00:36:41.096753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.096783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.097033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.097065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.097480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.097509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.097855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.097885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.098254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.098285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.098648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.098678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.099062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.099091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.099460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.099490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.099786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.099817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.100176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.100205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-10-09 00:36:41.100591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.100623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.101033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.101064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.101449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.101478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.101676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.101707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.102084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.102116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.102389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.102417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.102767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.102798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.103150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.103180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.103519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.103547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.103900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.103933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 
00:29:10.688 [2024-10-09 00:36:41.104276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.104306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.104658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.104694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.105075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.688 [2024-10-09 00:36:41.105104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.688 qpair failed and we were unable to recover it. 00:29:10.688 [2024-10-09 00:36:41.105474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-10-09 00:36:41.105503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-10-09 00:36:41.105857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-10-09 00:36:41.105889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-10-09 00:36:41.106254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-10-09 00:36:41.106284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-10-09 00:36:41.106528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-10-09 00:36:41.106561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-10-09 00:36:41.106942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-10-09 00:36:41.106974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-10-09 00:36:41.107327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-10-09 00:36:41.107358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 00:29:10.689 [2024-10-09 00:36:41.107716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.689 [2024-10-09 00:36:41.107754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.689 qpair failed and we were unable to recover it. 
00:29:10.689 [2024-10-09 00:36:41.108204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.689 [2024-10-09 00:36:41.108233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:10.689 qpair failed and we were unable to recover it.
[... this same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without variation from 00:36:41.108 through 00:36:41.189 ...]
00:29:10.694 [2024-10-09 00:36:41.189072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.694 [2024-10-09 00:36:41.189101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:10.694 qpair failed and we were unable to recover it.
00:29:10.694 [2024-10-09 00:36:41.189450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.189481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.189832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.189863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.190247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.190276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.190608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.190637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.190991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.191021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.191323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.191351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.191711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.191751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.192094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.192125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.192392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.192420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.192761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.192791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 
00:29:10.694 [2024-10-09 00:36:41.193036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.193066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.193435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.193463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.193825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.193857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.194110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.194140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.194490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.194519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.194881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.194912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.195291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.195320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.195750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.195782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.196178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.196208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.196577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.196606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 
00:29:10.694 [2024-10-09 00:36:41.196966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.196997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.694 qpair failed and we were unable to recover it. 00:29:10.694 [2024-10-09 00:36:41.197362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.694 [2024-10-09 00:36:41.197391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.197758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.197788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.198184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.198213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.198553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.198584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.198952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.198988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.199350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.199379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.199742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.199774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.200142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.200171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.200536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.200566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-10-09 00:36:41.200816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.200849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.201241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.201270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.201650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.201679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.202049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.202080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.202452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.202481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.202753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.202784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.203132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.203161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.203530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.203560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.203934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.203964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.204330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.204360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-10-09 00:36:41.204730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.204760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.204991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.205023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.205287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.205318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.205675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.205705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.206116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.206146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.206478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.206507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.206851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.206881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.207213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.207243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.207610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.207641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.207984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.208015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 
00:29:10.695 [2024-10-09 00:36:41.208452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.208481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.208808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.208840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.209198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.209239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.209561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.209592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.209935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.209966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.210343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.210371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.210719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.210760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.211154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.695 [2024-10-09 00:36:41.211183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.695 qpair failed and we were unable to recover it. 00:29:10.695 [2024-10-09 00:36:41.211533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.211561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.211895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.211926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-10-09 00:36:41.212183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.212216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.212561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.212591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.212950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.212981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.213313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.213344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.213681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.213709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.214043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.214073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.214448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.214478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.214821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.214852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.215221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.215250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.215560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.215590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-10-09 00:36:41.215933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.215963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.216327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.216357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.216781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.216811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.217174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.217204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.217564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.217593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.217948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.217978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.218346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.218374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.218756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.218787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.219132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.219162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.219396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.219424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-10-09 00:36:41.219776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.219808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.220177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.220206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.220572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.220602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.220982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.221022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.221391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.221420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.221787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.221817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.222196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.222225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.222552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.222583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.222815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.222847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.223292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.223322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 
00:29:10.696 [2024-10-09 00:36:41.223688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.223718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.224106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.224148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.224515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.224546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.224928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.224960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.225317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.225345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.225678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.225709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.226136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.226165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.226602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.226630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.696 [2024-10-09 00:36:41.226938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.696 [2024-10-09 00:36:41.226969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.696 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.227332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.227360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 [2024-10-09 00:36:41.227719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.227757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.228109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.228138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.228501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.228531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.228907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.228938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.229317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.229346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.229694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.229751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.229984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.230018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.230375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.230406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.230750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.230780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.231128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.231159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 [2024-10-09 00:36:41.231428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.231458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.231807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.231837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.232203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.232231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.232632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.232661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.233008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.233039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.233370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.233403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.233671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.233700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.234070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.234100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.234468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.234498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.234884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.234915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 [2024-10-09 00:36:41.235341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.235375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.235715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.235755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.236157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.236186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.236556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.236585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.236947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.236978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.237299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.237329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.237662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.237692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.238048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.238078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.238433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.238462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 00:29:10.697 [2024-10-09 00:36:41.238802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.238833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it. 
00:29:10.697 [2024-10-09 00:36:41.239235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.697 [2024-10-09 00:36:41.239264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.697 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 00:36:41.239 through 00:36:41.320; only the timestamps differ ...]
00:29:10.978 [2024-10-09 00:36:41.320798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.320828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it.
00:29:10.978 [2024-10-09 00:36:41.321207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.321236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.321680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.321712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.322098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.322129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.322497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.322527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.322815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.322845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.323201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.323229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.323594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.323622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.323968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.323998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.324368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.324397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.324768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.324799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 
00:29:10.978 [2024-10-09 00:36:41.325159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.325189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.325562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.325591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.325957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.325988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.326317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.326346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.326715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.326762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.327134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.327163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.327550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.327579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.327920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.327951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.328347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.328377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.328750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.328781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 
00:29:10.978 [2024-10-09 00:36:41.329131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.329162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.329536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.329565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.329940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.978 [2024-10-09 00:36:41.329969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.978 qpair failed and we were unable to recover it. 00:29:10.978 [2024-10-09 00:36:41.330333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.330364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.330713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.330776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.331124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.331153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.331402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.331434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.331796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.331826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.332210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.332239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.332581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.332609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 
00:29:10.979 [2024-10-09 00:36:41.333010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.333041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.333380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.333411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.333774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.333804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.334178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.334207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.334464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.334495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.334859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.334890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.335228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.335258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.335633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.335663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.336076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.336107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.336476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.336505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 
00:29:10.979 [2024-10-09 00:36:41.336854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.336885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.337245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.337274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.337646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.337676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.338050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.338080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.338436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.338466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.338832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.338863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.339231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.339260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.339618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.339649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.340016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.340047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.340404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.340435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 
00:29:10.979 [2024-10-09 00:36:41.340777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.340807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.341134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.341163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.341528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.341557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.341796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.341826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.342205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.342235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.342611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.342642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.343003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.343033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.343385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.343414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.343692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.343738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.344083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.344117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 
00:29:10.979 [2024-10-09 00:36:41.344507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.344536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.344673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.344705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.345090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.345120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.979 [2024-10-09 00:36:41.345479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.979 [2024-10-09 00:36:41.345509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.979 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.345758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.345790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.346174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.346203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.346556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.346585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.346951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.346983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.347341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.347370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.347755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.347785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-10-09 00:36:41.348165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.348195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.348549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.348580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.349039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.349069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.349317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.349349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.349743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.349774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.350140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.350170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.350531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.350560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.350959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.350989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.351350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.351379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.351634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.351667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-10-09 00:36:41.352088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.352118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.352392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.352422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.352767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.352798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.353079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.353108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.353503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.353534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.353888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.353921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.354254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.354285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.354640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.354678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.355061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.355092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.355529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.355560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-10-09 00:36:41.355997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.356027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.356376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.356413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.356806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.356843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.357202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.357232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.357588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.357617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.358020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.358051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.358403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.358432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.358704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.358741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.359171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.359203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.359563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.359593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-10-09 00:36:41.360038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.360069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.360416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.360449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.360758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.360789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.361110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.980 [2024-10-09 00:36:41.361141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.980 qpair failed and we were unable to recover it. 00:29:10.980 [2024-10-09 00:36:41.361528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.361557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.361830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.361860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.362251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.362282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.362618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.362647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.363042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.363072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.363410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.363439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-10-09 00:36:41.363880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.363911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.364278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.364307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.364639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.364667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.365034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.365065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.365430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.365462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.365861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.365890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.366271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.366302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.366673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.366702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.367114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.367146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.367496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.367531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-10-09 00:36:41.367824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.367857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.368240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.368271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.368627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.368658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.369041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.369072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.369458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.369490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.369825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.369855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.370196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.370227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.370488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.370518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.370890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.370921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 00:29:10.981 [2024-10-09 00:36:41.371207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.981 [2024-10-09 00:36:41.371239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-10-09 00:36:41.371510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.981 [2024-10-09 00:36:41.371538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:10.981 qpair failed and we were unable to recover it.
[... the same three-entry sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every remaining connection attempt in this interval ...]
00:29:10.987 [2024-10-09 00:36:41.452516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.987 [2024-10-09 00:36:41.452545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:10.987 qpair failed and we were unable to recover it.
00:29:10.987 [2024-10-09 00:36:41.452801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.452833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.453188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.453218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.453576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.453606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.453957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.453987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.454340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.454369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.454746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.454777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.455155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.455185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.455557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.455587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.455927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.455957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.456308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.456337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-10-09 00:36:41.456693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.456730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.457081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.457110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.457468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.457499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.457860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.457892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.458263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.458291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.458645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.458673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.459043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.459074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.459431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.459461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.459820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.459850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.460205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.460234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-10-09 00:36:41.460603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.460632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.460989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.461019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.461352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.461381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.461639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.461673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.462075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.462106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.462445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.462475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.462735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.462766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.463159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.463187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.463523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.463553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.463922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.463952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 
00:29:10.987 [2024-10-09 00:36:41.464320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.464350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.464718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.464764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.465126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.465155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.465488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.465518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.465858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.465888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.466268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.466296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.466673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.987 [2024-10-09 00:36:41.466703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.987 qpair failed and we were unable to recover it. 00:29:10.987 [2024-10-09 00:36:41.467116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.467147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.467502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.467531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.467904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.467934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-10-09 00:36:41.468200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.468233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.468569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.468600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.468948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.468979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.469347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.469376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.469740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.469769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.470105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.470134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.470533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.470562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.470929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.470960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.471225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.471255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.471596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.471625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-10-09 00:36:41.471978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.472010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.472342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.472373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.472700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.472738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.473146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.473175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.473527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.473556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.473938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.473968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.474300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.474328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.474680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.474734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.475140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.475168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.475527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.475558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-10-09 00:36:41.475897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.475927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.476290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.476320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.476624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.476654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.477000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.477029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.477393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.477422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.477793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.477824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.478166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.478196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.478450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.478479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.478841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.478871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.479232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.479261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 
00:29:10.988 [2024-10-09 00:36:41.479627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.479655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.480025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.480055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.480430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.480461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.480732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.480762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.481109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.481138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.481484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.481514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.481841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.481871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.988 qpair failed and we were unable to recover it. 00:29:10.988 [2024-10-09 00:36:41.482238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.988 [2024-10-09 00:36:41.482268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.482632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.482663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.483018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.483050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 
00:29:10.989 [2024-10-09 00:36:41.483376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.483405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.483771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.483802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.484174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.484203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.484578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.484607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.484980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.485010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.485354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.485383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.485744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.485774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.486117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.486146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.486474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.486504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.486866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.486896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 
00:29:10.989 [2024-10-09 00:36:41.487256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.487285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.487629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.487662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.488059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.488089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.488443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.488472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.488836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.488866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.489240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.489270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.489649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.489680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.490037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.490067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.490490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.490520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.490784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.490814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 
00:29:10.989 [2024-10-09 00:36:41.491137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.491169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.491514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.491543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.491902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.491933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.492300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.492329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.492698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.492736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.493095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.493123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.493462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.493491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.493861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.493892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.494247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.494277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.494638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.494667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 
00:29:10.989 [2024-10-09 00:36:41.495086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.495116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.495479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.495508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.495879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.495910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.496282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.496311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.496667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.496696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.497061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.497091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.497471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.497501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.989 [2024-10-09 00:36:41.497745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.989 [2024-10-09 00:36:41.497776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.989 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.498048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.498082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.498451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.498482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 
00:29:10.990 [2024-10-09 00:36:41.498847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.498878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.499239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.499268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.499614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.499643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.500020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.500050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.500423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.500452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.500799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.500829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.501056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.501089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.501502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.501531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.501884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.501913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.502355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.502383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 
00:29:10.990 [2024-10-09 00:36:41.502752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.502784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.503136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.503172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.503419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.503449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.503819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.503851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.504198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.504227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.504588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.504618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.504874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.504904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.505254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.505284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.505630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.505660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.506028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.506058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 
00:29:10.990 [2024-10-09 00:36:41.506427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.506456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.506798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.506828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.507191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.507220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.507583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.507612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.507930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.507960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.508220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.508249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.508626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.508656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.509035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.509065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.509403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.509434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.509793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.509823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 
00:29:10.990 [2024-10-09 00:36:41.510182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.510212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.510546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.510575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.510915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.510946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.511317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.990 [2024-10-09 00:36:41.511346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.990 qpair failed and we were unable to recover it. 00:29:10.990 [2024-10-09 00:36:41.511652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.511680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.512076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.512107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.512443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.512473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.512828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.512858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.513219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.513247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.513614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.513646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 
00:29:10.991 [2024-10-09 00:36:41.514022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.514053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.514415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.514444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.514798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.514828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.515181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.515211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.515560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.515589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.515933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.515964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.516322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.516351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.516714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.516752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.517119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.517148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.517475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.517503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 
00:29:10.991 [2024-10-09 00:36:41.517848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.517879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.518270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.518300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.518629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.518659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.519046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.519078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.519444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.519473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.519839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.519869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.520264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.520293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.520634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.520665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.521002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.521032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.521412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.521441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 
00:29:10.991 [2024-10-09 00:36:41.521802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.521831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.522201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.522230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.522488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.522521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.522777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.522808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.523139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.523168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.523484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.523514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.523877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.523913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.524173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.524201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.524564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.524594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.524993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.525023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 
00:29:10.991 [2024-10-09 00:36:41.525381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.525413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.525784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.525820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.526157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.526185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.526537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.526567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.991 [2024-10-09 00:36:41.526819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.991 [2024-10-09 00:36:41.526853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.991 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.527214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.527243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.527655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.527686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.528071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.528102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.528487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.528517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.528789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.528819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-10-09 00:36:41.529193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.529222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.529586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.529615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.529952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.529982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.530349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.530378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.530705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.530763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.531120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.531149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.531519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.531548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.531826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.531857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.532207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.532237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.532591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.532620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-10-09 00:36:41.532961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.532992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.533352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.533382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.533756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.533788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.534141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.534177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.534546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.534578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.534922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.534953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.535298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.535329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.535672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.535702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.536065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.536096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.536464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.536494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-10-09 00:36:41.536744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.536775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.538686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.538764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.539204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.539237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.539612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.539641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.540021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.540051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.540400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.540428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.540788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.540820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.541072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.541103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.541494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.541523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.541857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.541887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 
00:29:10.992 [2024-10-09 00:36:41.542254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.542283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.542640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.542668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.543033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.543064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.543416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.543445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.543801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.543832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.992 [2024-10-09 00:36:41.544185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.992 [2024-10-09 00:36:41.544215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.992 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.544593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.544621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.544991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.545023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.545397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.545426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.545696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.545731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-10-09 00:36:41.546078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.546115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.546499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.546529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.546902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.546931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.547291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.547320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.547657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.547687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.548132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.548162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.548602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.548632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.548981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.549013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.549342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.549370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.549743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.549774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-10-09 00:36:41.550039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.550068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.550420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.550450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.550709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.550747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.551134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.551163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.551515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.551544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.551806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.551841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.552241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.552270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.552637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.552666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.553081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.553112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.553446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.553477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-10-09 00:36:41.553854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.553885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.554227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.554257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.554634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.554663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.555032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.555061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.555437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.555467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.555839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.555870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.556215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.556245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.556607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.556636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.556966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.557001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.557255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.557284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 
00:29:10.993 [2024-10-09 00:36:41.557636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.557667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.558065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.558096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.558464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.558492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.558854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.558884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.559264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.559294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.559555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.993 [2024-10-09 00:36:41.559588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.993 qpair failed and we were unable to recover it. 00:29:10.993 [2024-10-09 00:36:41.559958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.559989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.560363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.560392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.560767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.560796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.561198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.561228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 
00:29:10.994 [2024-10-09 00:36:41.561594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.561623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.561983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.562014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.562353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.562384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.562744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.562775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.563114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.563143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.563397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.563426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.563754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.563784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.564145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.564173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.564508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.564538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.564888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.564917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 
00:29:10.994 [2024-10-09 00:36:41.565265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.565294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.565651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.565680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.566050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.566079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.566404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.566433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.566798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.566829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.567196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.567225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.567560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.567590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.567944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.567974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.568310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.568340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 00:29:10.994 [2024-10-09 00:36:41.568751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.994 [2024-10-09 00:36:41.568782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:10.994 qpair failed and we were unable to recover it. 
00:29:10.994 [2024-10-09 00:36:41.569145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.994 [2024-10-09 00:36:41.569174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:10.994 qpair failed and we were unable to recover it.
00:29:10.994 [2024-10-09 00:36:41.569597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.994 [2024-10-09 00:36:41.569625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:10.994 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 00:36:41.569 through 00:36:41.647, console timestamps 00:29:10.994 to 00:29:11.272 ...]
00:29:11.272 [2024-10-09 00:36:41.647133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.272 [2024-10-09 00:36:41.647163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.272 qpair failed and we were unable to recover it.
00:29:11.272 [2024-10-09 00:36:41.647522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-10-09 00:36:41.647550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-10-09 00:36:41.647662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-10-09 00:36:41.647691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-10-09 00:36:41.648072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-10-09 00:36:41.648102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-10-09 00:36:41.648446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-10-09 00:36:41.648477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-10-09 00:36:41.648713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-10-09 00:36:41.648756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-10-09 00:36:41.648988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-10-09 00:36:41.649017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-10-09 00:36:41.649368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-10-09 00:36:41.649398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.272 qpair failed and we were unable to recover it. 00:29:11.272 [2024-10-09 00:36:41.649764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.272 [2024-10-09 00:36:41.649796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.650152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.650182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.650564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.650594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 
00:29:11.273 [2024-10-09 00:36:41.650932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.650962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.651412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.651442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.651792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.651823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.652228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.652257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.652593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.652623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.652973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.653008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.653356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.653387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.653754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.653784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.654165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.654195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.654532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.654562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 
00:29:11.273 [2024-10-09 00:36:41.654784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.654814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.655132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.655162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.655408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.655440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.655800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.655831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.656119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.656149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.656510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.656538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.656910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.656940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.657322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.657350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.657710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.657749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.658045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.658074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 
00:29:11.273 [2024-10-09 00:36:41.658300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.658329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.658706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.658756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.659136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.659165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.659526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.659557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.659938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.659969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.660240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.660268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.660613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.660642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.273 [2024-10-09 00:36:41.661006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.273 [2024-10-09 00:36:41.661036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.273 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.661261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.661292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.661687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.661718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-10-09 00:36:41.662098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.662127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.662486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.662516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.662902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.662939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.663320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.663349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.663707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.663754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.664161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.664191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.664526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.664556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.664910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.664942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.665311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.665340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.665782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.665813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-10-09 00:36:41.666179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.666208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.666566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.666596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.666951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.666982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.667332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.667361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.667745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.667775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.668127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.668158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.668515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.668545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.668912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.668942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.669265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.669294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.669649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.669679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-10-09 00:36:41.670022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.670054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.670389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.670418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.670783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.670813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.671168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.671196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.671562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.671592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.671856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.671886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.672138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.672167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.672520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.672550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.672908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.672938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.673389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.673419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 
00:29:11.274 [2024-10-09 00:36:41.673780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.673810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.674187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.674216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.674519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.674547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.674893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.674923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.675237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.675265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.675645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.675674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.675981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.274 [2024-10-09 00:36:41.676013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.274 qpair failed and we were unable to recover it. 00:29:11.274 [2024-10-09 00:36:41.676349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.676378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.676754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.676786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.677138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.677167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-10-09 00:36:41.677538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.677567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.677825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.677855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.678220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.678249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.678573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.678604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.679038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.679068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.679436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.679465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.679821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.679850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.680223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.680253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.680668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.680698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.681073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.681103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-10-09 00:36:41.681444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.681474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.681830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.681860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.682220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.682248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.682621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.682649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.683011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.683042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.683419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.683447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.683695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.683761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.684128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.684158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.684416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.684444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.684824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.684855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-10-09 00:36:41.685130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.685159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.685504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.685533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.685855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.685886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.686257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.686285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.686648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.686677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.687157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.687188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.687478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.687507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.687869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.687899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.688261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.688290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.688642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.688671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 
00:29:11.275 [2024-10-09 00:36:41.689019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.689060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.689418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.689446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.689786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.689817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.690180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.690210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.690573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.690602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.690854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.690883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.691237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.691266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.275 [2024-10-09 00:36:41.691621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.275 [2024-10-09 00:36:41.691649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.275 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.692058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.692088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.692521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.692551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 
00:29:11.276 [2024-10-09 00:36:41.692987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.693017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.693382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.693411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.693777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.693808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.694171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.694200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.694563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.694592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.694846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.694876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.695238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.695268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.695669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.695700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.696066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.696096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.696461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.696490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 
00:29:11.276 [2024-10-09 00:36:41.696760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.696790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.697174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.697203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.697563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.697592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.697961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.697991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.698338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.698367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.698732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.698764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.699132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.699160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.699526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.699560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.699974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.700006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.700188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.700218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 
00:29:11.276 [2024-10-09 00:36:41.700623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.700653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.701000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.701030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.701420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.701449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.701815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.701846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.702156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.702185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.702547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.702576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.702904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.702933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.703300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.703329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.703572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.703604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.703951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.703982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 
00:29:11.276 [2024-10-09 00:36:41.704323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.704353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.704733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.704764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.705119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.705149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.705518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.705546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.705906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.705939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.706302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.706332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.276 qpair failed and we were unable to recover it. 00:29:11.276 [2024-10-09 00:36:41.706570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.276 [2024-10-09 00:36:41.706603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.706950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.706981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.707355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.707386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.707745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.707776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 
00:29:11.277 [2024-10-09 00:36:41.708136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.708166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.708509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.708537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.708916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.708946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.709283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.709312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.709679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.709713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.709972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.710005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.710390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.710420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.710784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.710815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.711192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.711222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.711457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.711485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 
00:29:11.277 [2024-10-09 00:36:41.711903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.711934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.712276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.712313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.712704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.712742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.713142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.713171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.713540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.713570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.713990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.714020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.714362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.714392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.714759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.714791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.715151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.715179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.715521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.715551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 
00:29:11.277 [2024-10-09 00:36:41.715930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.715961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.716317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.716345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.716718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.716758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.717149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.717178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.717549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.717577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.717825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.717858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.718130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.718161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.718544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.718573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.718959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.718990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.719355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.719383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 
00:29:11.277 [2024-10-09 00:36:41.719711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.719749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.720095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.720124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.720490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.720520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.720884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.720915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.721246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.721276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.721632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.721661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.722028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.277 [2024-10-09 00:36:41.722058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.277 qpair failed and we were unable to recover it. 00:29:11.277 [2024-10-09 00:36:41.722416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.722445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.722794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.722824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.723189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.723218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 
00:29:11.278 [2024-10-09 00:36:41.723575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.723605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.723965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.723996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.724368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.724399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.724752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.724782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.725144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.725173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.725418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.725452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.725809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.725839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.726201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.726231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.726456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.726489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.726843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.726874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 
00:29:11.278 [2024-10-09 00:36:41.727246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.727275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.727604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.727634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.728009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.728040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.728401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.728430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.728778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.728809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.729158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.729189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.729548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.729577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.729919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.729951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.730318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.730347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.730713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.730771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 
00:29:11.278 [2024-10-09 00:36:41.731115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.731144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.731397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.731429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.731794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.731827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.732205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.732233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.732573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.732603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.732944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.732975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.733321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.733351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.733728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.733758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.734005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.734034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.734388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.734416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 
00:29:11.278 [2024-10-09 00:36:41.734758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.734789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.735140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.735169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.278 [2024-10-09 00:36:41.735527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.278 [2024-10-09 00:36:41.735561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.278 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.735940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.735971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.736309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.736339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.736704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.736742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.737089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.737118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.737477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.737506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.737886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.737916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.738264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.738292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-10-09 00:36:41.738644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.738673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.739058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.739089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.739412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.739440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.739716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.739756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.740130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.740159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.740531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.740560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.740921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.740953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.741327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.741356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.741716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.741769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.742165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.742194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-10-09 00:36:41.742439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.742468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.742883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.742914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.743261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.743289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.743629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.743658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.744055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.744085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.744437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.744465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.744882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.744912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.745274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.745303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.745685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.745713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.746087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.746121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-10-09 00:36:41.746477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.746506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.746870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.746899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.747230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.747259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.747612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.747641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.747984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.748013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.748381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.748410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.748791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.748822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.749190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.749218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.749585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.749613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.749976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.750007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 
00:29:11.279 [2024-10-09 00:36:41.750385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.750415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.750769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.750800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.279 [2024-10-09 00:36:41.751171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.279 [2024-10-09 00:36:41.751201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.279 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.751570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.751598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.751945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.751976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.752344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.752373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.752707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.752746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.753078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.753107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.753479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.753507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.753883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.753915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-10-09 00:36:41.754267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.754296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.754659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.754688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.755076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.755107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.755468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.755496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.755848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.755879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.756247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.756276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.756607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.756637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.757040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.757070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.757419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.757449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.757798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.757829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-10-09 00:36:41.758171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.758201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.758442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.758475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.758829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.758868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.759229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.759258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.759636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.759665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.760112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.760149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.760506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.760535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.760886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.760917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.761305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.761334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 00:29:11.280 [2024-10-09 00:36:41.761690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.280 [2024-10-09 00:36:41.761730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.280 qpair failed and we were unable to recover it. 
00:29:11.280 [2024-10-09 00:36:41.762006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.280 [2024-10-09 00:36:41.762040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.280 qpair failed and we were unable to recover it.
00:29:11.280 [... the same three-line failure (connect() errno = 111 from posix.c:1055, sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2399, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 2024-10-09 00:36:41.762006 through 00:36:41.833965; only the timestamps differ between entries ...]
00:29:11.285 [2024-10-09 00:36:41.833935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.833965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.834298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.834328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.834551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.834583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.834949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.834979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.835352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.835383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.835775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3437291 Killed "${NVMF_APP[@]}" "$@"
00:29:11.285 [2024-10-09 00:36:41.835808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.836058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.836089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.836456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.836487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:11.285 [2024-10-09 00:36:41.836843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.836882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.837238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.837269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
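[editor's note] The repeated posix_sock_create failures above all report errno = 111, which on Linux is ECONNREFUSED: the host keeps calling connect() against 10.0.0.2:4420 while the target application has just been killed, so nothing is listening on that port. The following standalone C sketch (not SPDK code; the loopback address and port are placeholders) reproduces that failure mode for illustration:

/* Illustrative only: connect() to a port with no listener returns -1 with errno 111. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* placeholder address, not the test target */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}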
00:29:11.285 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:11.285 [2024-10-09 00:36:41.837630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.837661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:11.285 [2024-10-09 00:36:41.837905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.837943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:11.285 [2024-10-09 00:36:41.838367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.838406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.285 [2024-10-09 00:36:41.838774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.838807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.839160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.285 [2024-10-09 00:36:41.839190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.285 qpair failed and we were unable to recover it.
00:29:11.285 [2024-10-09 00:36:41.839548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.839579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 [2024-10-09 00:36:41.839959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.839991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 [2024-10-09 00:36:41.840357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.840387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
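[editor's note] The nvmfappstart trace above restarts the target with "-m 0xF0". In SPDK applications -m is a hexadecimal CPU core mask; 0xF0 is binary 1111 0000, i.e. cores 4 through 7. A minimal standalone C sketch (not SPDK code) that decodes such a mask:

/* Decode a hexadecimal core mask like the -m 0xF0 seen in the trace above. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;   /* core mask taken from the log */

    printf("core mask 0x%lX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core))
            printf(" %d", core);
    }
    printf("\n");                /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}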
00:29:11.286 [2024-10-09 00:36:41.840753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.840784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.841020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.841053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.841442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.841471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.841828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.841858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.842196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.842225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.842467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.842498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.842847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.842876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.843264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.843306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.843650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.843683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.844062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.844094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 
00:29:11.286 [2024-10-09 00:36:41.844452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.844481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 [2024-10-09 00:36:41.844841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.844873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 [2024-10-09 00:36:41.845279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.845308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 [2024-10-09 00:36:41.845558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.845587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 [2024-10-09 00:36:41.845956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.845988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 [2024-10-09 00:36:41.846420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.846449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 [2024-10-09 00:36:41.846808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.846838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3438321
00:29:11.286 [2024-10-09 00:36:41.847207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.847239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3438321
00:29:11.286 [2024-10-09 00:36:41.847601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.286 [2024-10-09 00:36:41.847633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.286 qpair failed and we were unable to recover it.
00:29:11.286 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:11.286 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3438321 ']' 00:29:11.286 [2024-10-09 00:36:41.848034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.848067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.286 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:11.286 [2024-10-09 00:36:41.848485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.848517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.286 [2024-10-09 00:36:41.848879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:11.286 [2024-10-09 00:36:41.848915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 00:36:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.286 [2024-10-09 00:36:41.849283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.849319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.849516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.849546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.849961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.849993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 
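[editor's note] In the trace above the harness launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits for it ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...", max_retries=100). The sketch below is only a rough standalone illustration of that wait-for-listen pattern in C; the real waitforlisten helper works differently (it is shell-based and polls the SPDK RPC socket), so treat the function name and details here as assumptions, with the socket path and retry count borrowed from the log.

/* Illustrative wait-for-listen loop: retry connect() on a UNIX socket until it succeeds. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;        /* something is now listening on the socket */
        }
        close(fd);
        sleep(1);            /* not listening yet; wait and retry */
    }
    return -1;               /* gave up after max_retries attempts */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("socket is accepting connections\n");
    else
        printf("timed out waiting for listener\n");
    return 0;
}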
00:29:11.286 [2024-10-09 00:36:41.850238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.850272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.850517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.850548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.850882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.850913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.851297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.851328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.851672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.851706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.852107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.852140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.852512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.852546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.286 qpair failed and we were unable to recover it. 00:29:11.286 [2024-10-09 00:36:41.852900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.286 [2024-10-09 00:36:41.852933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.853297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.853327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.853581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.853616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 
00:29:11.287 [2024-10-09 00:36:41.854019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.854054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.854412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.854443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.854828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.854860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.855227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.855257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.855637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.855669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.856015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.856047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.856464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.856495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.856867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.856899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.857164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.857202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.857553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.857586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 
00:29:11.287 [2024-10-09 00:36:41.857939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.857974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.858357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.858388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.858757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.858790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.859149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.859179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.859542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.859572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.859834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.859866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.860255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.860285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.860520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.860550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.860840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.860871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.861131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.861164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 
00:29:11.287 [2024-10-09 00:36:41.861499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.861529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.861827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.861857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.862223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.862253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.862622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.862655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.863030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.863061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.863430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.863459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.863832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.863862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.864226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.864255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.864637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.864667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.865034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.865065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 
00:29:11.287 [2024-10-09 00:36:41.865425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.865456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.865827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.865859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.287 [2024-10-09 00:36:41.866203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.287 [2024-10-09 00:36:41.866233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.287 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.866501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.866531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.866769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.866803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.867184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.867220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.867592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.867622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.867907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.867939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.868293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.868322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.868686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.868716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 
00:29:11.288 [2024-10-09 00:36:41.869017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.869047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.869397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.869434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.869784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.869817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.870221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.870251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.870485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.870519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.870883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.870913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.871159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.871189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.871549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.871579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.871945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.871976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.872270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.872300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 
00:29:11.288 [2024-10-09 00:36:41.872648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.872679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.873072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.873102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.873349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.873378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.873604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.873633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.873974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.874004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.874373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.874403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.874794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.874825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.875213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.875242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.875607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.875638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.875941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.875971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 
00:29:11.288 [2024-10-09 00:36:41.876233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.876266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.876708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.876764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.877214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.877244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.877604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.877633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.877848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.877880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.878144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.878174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.878520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.878550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.878795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.878824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.879195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.879224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.879467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.879496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 
00:29:11.288 [2024-10-09 00:36:41.879761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.879790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.288 qpair failed and we were unable to recover it. 00:29:11.288 [2024-10-09 00:36:41.880171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.288 [2024-10-09 00:36:41.880202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.880553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.880584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.880838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.880869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.881246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.881277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.881626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.881655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.882022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.882054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.882395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.882424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.882783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.882814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.883143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.883177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 
00:29:11.289 [2024-10-09 00:36:41.883527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.883556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.884045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.884075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.884429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.884460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.884851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.884883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.885260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.885290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.885646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.885675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.885970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.886000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.886372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.886402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.886638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.886670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.887066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.887097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 
00:29:11.289 [2024-10-09 00:36:41.887407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.887439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.887794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.887825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.888236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.888266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.888509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.888541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.888946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.888977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.889334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.889364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.889745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.889776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.890143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.890173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.890548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.890578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.890799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.890830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 
00:29:11.289 [2024-10-09 00:36:41.891180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.891211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.289 [2024-10-09 00:36:41.891583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.289 [2024-10-09 00:36:41.891612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.289 qpair failed and we were unable to recover it. 00:29:11.566 [2024-10-09 00:36:41.891957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.566 [2024-10-09 00:36:41.891991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.566 qpair failed and we were unable to recover it. 00:29:11.566 [2024-10-09 00:36:41.892289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.566 [2024-10-09 00:36:41.892327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.566 qpair failed and we were unable to recover it. 00:29:11.566 [2024-10-09 00:36:41.892657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.566 [2024-10-09 00:36:41.892686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.566 qpair failed and we were unable to recover it. 00:29:11.566 [2024-10-09 00:36:41.893092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.566 [2024-10-09 00:36:41.893124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.566 qpair failed and we were unable to recover it. 00:29:11.566 [2024-10-09 00:36:41.893491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.566 [2024-10-09 00:36:41.893521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.566 qpair failed and we were unable to recover it. 00:29:11.566 [2024-10-09 00:36:41.893931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.566 [2024-10-09 00:36:41.893962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.566 qpair failed and we were unable to recover it. 00:29:11.566 [2024-10-09 00:36:41.894330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.566 [2024-10-09 00:36:41.894360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.566 qpair failed and we were unable to recover it. 00:29:11.566 [2024-10-09 00:36:41.894759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.566 [2024-10-09 00:36:41.894790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.566 qpair failed and we were unable to recover it. 
00:29:11.566 [2024-10-09 00:36:41.895066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.566 [2024-10-09 00:36:41.895095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.566 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.895336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.895365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.895746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.895777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.896131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.896163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.896585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.896614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.896869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.896901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.897274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.897304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.897762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.897795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.898174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.898206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.898464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.898493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 
00:29:11.567 [2024-10-09 00:36:41.898843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.898874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.899257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.899287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.899662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.899691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.899944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.899977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.900369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.900400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.900760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.900799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.901225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.901256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.901509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.901539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.901907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.901938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.902305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.902337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 
00:29:11.567 [2024-10-09 00:36:41.902716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.902767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.903130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.903160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.903540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.903574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.903740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.903777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.903766] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:29:11.567 [2024-10-09 00:36:41.903842] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.567 [2024-10-09 00:36:41.904166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.904198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.904558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.904588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.904839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.904871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.905231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.905262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.905611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.905642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 
00:29:11.567 [2024-10-09 00:36:41.906014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.906048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.906414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.906446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.906797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.906830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.907209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.907242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.907601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.907634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.907973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.908007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.908304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.908336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.908684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.908718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.909136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.567 [2024-10-09 00:36:41.909170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.567 qpair failed and we were unable to recover it. 00:29:11.567 [2024-10-09 00:36:41.909540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.909571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.568 [2024-10-09 00:36:41.909918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.909951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.910326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.910357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.910715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.910760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.911103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.911134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.911474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.911506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.911823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.911855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.912214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.912246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.912604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.912641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.912972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.913006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.913458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.913488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.568 [2024-10-09 00:36:41.913883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.913917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.914285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.914317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.914678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.914711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.915088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.915121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.915484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.915515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.915859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.915892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.916256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.916287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.916549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.916581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.916942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.916973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.917297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.917329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.568 [2024-10-09 00:36:41.917702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.917751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.918119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.918152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.918502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.918536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.918775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.918810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.919175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.919205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.919563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.919596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.919965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.919999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.920201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.920232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.920614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.920647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.921030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.921064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 
00:29:11.568 [2024-10-09 00:36:41.921428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.921462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.921784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.921817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.922223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.922255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.922611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.922643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.922977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.923023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.923276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.923309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.568 qpair failed and we were unable to recover it. 00:29:11.568 [2024-10-09 00:36:41.923665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.568 [2024-10-09 00:36:41.923697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.924097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.924132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.924510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.924545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.924898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.924933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-10-09 00:36:41.925276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.925307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.925658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.925689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.926117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.926152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.926516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.926547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.926898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.926930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.927309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.927339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.927698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.927741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.928068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.928098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.928455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.928486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.928840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.928872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-10-09 00:36:41.929128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.929158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.929522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.929553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.929950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.929981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.930169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.930198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.930617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.930648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.930995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.931027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.931384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.931415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.931776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.931809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.932168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.932198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.932571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.932602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-10-09 00:36:41.932867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.932900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.933267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.933298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.933562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.933593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.933975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.934006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.934342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.934374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.934745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.934778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.935154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.935184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.935422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.935453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.935842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.935874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.936256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.936286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 
00:29:11.569 [2024-10-09 00:36:41.936645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.936675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.937065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.569 [2024-10-09 00:36:41.937097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.569 qpair failed and we were unable to recover it. 00:29:11.569 [2024-10-09 00:36:41.937345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.937375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.937734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.937766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.938134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.938163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.938546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.938575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.938948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.938980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.939334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.939364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.939716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.939758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.940018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.940048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 
00:29:11.570 [2024-10-09 00:36:41.940428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.940458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.940829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.940859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.941218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.941249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.941625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.941655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.941882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.941911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.942257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.942286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.942678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.942707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.943089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.943118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.943479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.943509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.943755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.943791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 
00:29:11.570 [2024-10-09 00:36:41.944183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.944214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.944558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.944587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.944841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.944872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.945242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.945273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.945509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.945537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.945863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.945893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.946271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.946301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.946548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.946581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.946972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.947003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.947368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.947398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 
00:29:11.570 [2024-10-09 00:36:41.947631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.947661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.948065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.948097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.948457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.948495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.948824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.948855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.949232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.949261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.949614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.949644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.949987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.950018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.950418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.950448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.950787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.950817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.570 [2024-10-09 00:36:41.951193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.951222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 
00:29:11.570 [2024-10-09 00:36:41.951472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.570 [2024-10-09 00:36:41.951505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.570 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.951805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.951836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.952209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.952239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.952574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.952604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.952950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.952982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.953330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.953359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.953717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.953758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.954063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.954091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.954454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.954483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.954830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.954861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 
00:29:11.571 [2024-10-09 00:36:41.955238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.955267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.955575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.955606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.955954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.955984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.956365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.956393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.956655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.956684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.957076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.957107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.957472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.957500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.957868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.957898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.958234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.958263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.958629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.958664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 
00:29:11.571 [2024-10-09 00:36:41.959026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.959055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.959327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.959357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.959742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.959774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.960158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.960189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.960557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.960589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.960936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.960967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.961344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.961374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.961615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.961643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.961999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.962030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.962290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.962320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 
00:29:11.571 [2024-10-09 00:36:41.962718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.962757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.963106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.571 [2024-10-09 00:36:41.963135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.571 qpair failed and we were unable to recover it. 00:29:11.571 [2024-10-09 00:36:41.963494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.963523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.963909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.963941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.964199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.964232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.964583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.964613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.964949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.964981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.965238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.965268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.965627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.965656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.966024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.966055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 
00:29:11.572 [2024-10-09 00:36:41.966425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.966457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.966779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.966809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.967188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.967217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.967570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.967600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.967959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.967996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.968368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.968398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.968754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.968792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.969055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.969087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.969456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.969485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.969853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.969884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 
00:29:11.572 [2024-10-09 00:36:41.970262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.970290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.970663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.970693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.971066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.971096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.971465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.971496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.971860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.971896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.972267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.972297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.972657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.972687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.973093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.973125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.973355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.973389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.973736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.973768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 
00:29:11.572 [2024-10-09 00:36:41.974126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.974157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.974521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.974550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.974886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.974918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.975275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.975307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.975649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.975679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.976040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.976072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.976408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.976437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.976794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.976828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.977173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.977203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.977437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.977468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 
00:29:11.572 [2024-10-09 00:36:41.977848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.977879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.978246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.978275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.572 qpair failed and we were unable to recover it. 00:29:11.572 [2024-10-09 00:36:41.978641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.572 [2024-10-09 00:36:41.978671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.979092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.979125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.979491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.979520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.979881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.979912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.980264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.980294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.980442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.980472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.980887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.980919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.981260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.981300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 
00:29:11.573 [2024-10-09 00:36:41.981627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.981657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.982022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.982053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.982417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.982445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.982837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.982868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.983121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.983153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.983466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.983497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.983839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.983870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.984233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.984263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.984622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.984653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.985005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.985036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 
00:29:11.573 [2024-10-09 00:36:41.985395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.985428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.985791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.985822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.986191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.986222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.986611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.986641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.987054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.987086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.987328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.987357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.987739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.987769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.988105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.988138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.988491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.988523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.988874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.988908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 
00:29:11.573 [2024-10-09 00:36:41.989273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.989304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.989563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.989595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.990034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.990065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.990408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.990437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.990788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.990819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.991155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.991186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.991424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.991454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.991800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.991831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.992189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.992218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.992598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.992627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 
00:29:11.573 [2024-10-09 00:36:41.993051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.993082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.993447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.993479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.993850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.573 [2024-10-09 00:36:41.993881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.573 qpair failed and we were unable to recover it. 00:29:11.573 [2024-10-09 00:36:41.994261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.994290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.994646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.994681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.995096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.995127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.995226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.995254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.995601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.995630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.995979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.996010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.996261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.996290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 
00:29:11.574 [2024-10-09 00:36:41.996655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.996684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.997092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.997124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.997444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.574 [2024-10-09 00:36:41.997487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.997517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.997898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.997929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.998292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.998321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.998696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.998735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.999090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.999119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.999490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.999519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:41.999892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:41.999924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.000306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.000336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 
00:29:11.574 [2024-10-09 00:36:42.000711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.000762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.001123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.001152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.001395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.001425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.001746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.001778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.002163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.002193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.002530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.002562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.002942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.002973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.003355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.003385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.003747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.003778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.004130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.004160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 
00:29:11.574 [2024-10-09 00:36:42.004535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.004564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.004900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.004932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.005304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.005335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.005696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.005737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.006117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.006146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.006416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.006448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.006805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.006837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.007203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.007233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.007599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.007630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.007986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.008017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 
00:29:11.574 [2024-10-09 00:36:42.008387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.574 [2024-10-09 00:36:42.008418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.574 qpair failed and we were unable to recover it. 00:29:11.574 [2024-10-09 00:36:42.008785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.008817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.009170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.009201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.009547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.009578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.009947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.009978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.010256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.010288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.010672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.010703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.011095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.011126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.011394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.011423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.011776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.011808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 
00:29:11.575 [2024-10-09 00:36:42.012176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.012207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.012582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.012612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.013042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.013074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.013321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.013351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.013750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.013780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.014042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.014074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.014331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.014361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.014744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.014775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.015162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.015197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.015427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.015456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 
00:29:11.575 [2024-10-09 00:36:42.015884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.015915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.016282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.016313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.016679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.016709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.017087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.017117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.017472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.017501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.017906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.017937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.018163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.018192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.018539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.018569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.018934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.018964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.019325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.019354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 
00:29:11.575 [2024-10-09 00:36:42.019712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.019755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.019897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.019928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.020187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.020217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.020568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.020599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.020951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.575 [2024-10-09 00:36:42.020982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.575 qpair failed and we were unable to recover it. 00:29:11.575 [2024-10-09 00:36:42.021360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.021390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.021750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.021780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.022140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.022170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.022531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.022561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.022944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.022974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 
00:29:11.576 [2024-10-09 00:36:42.023341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.023370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.023746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.023777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.024153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.024182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.024562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.024592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.024949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.024981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.025393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.025429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.025857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.025892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.026233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.026263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.026603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.026632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.026971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.027003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 
00:29:11.576 [2024-10-09 00:36:42.027380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.027409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.027782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.027813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.028172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.028202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.028563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.028593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.028897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.028928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.029293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.029325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.029676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.029709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.030126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.030156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.030524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.030554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.030783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.030813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 
00:29:11.576 [2024-10-09 00:36:42.031188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.031217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.031583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.031613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.031972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.032004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.032375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.032405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.032768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.032800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.033171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.033199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.033548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.033577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.033804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.033835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.034193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.034222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.034585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.034615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 
00:29:11.576 [2024-10-09 00:36:42.034981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.035011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.035336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.035366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.035589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.035623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.035895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.035929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.036268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.576 [2024-10-09 00:36:42.036307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.576 qpair failed and we were unable to recover it. 00:29:11.576 [2024-10-09 00:36:42.036634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.036663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.037110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.037143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.037512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.037543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.037906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.037936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.038181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.038213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 
00:29:11.577 [2024-10-09 00:36:42.038598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.038629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.039019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.039049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.039401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.039432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.039797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.039828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.040193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.040224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.040594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.040623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.041003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.041035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.041267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.041295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.041680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.041709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.042072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.042102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 
00:29:11.577 [2024-10-09 00:36:42.042451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.042480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.042833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.042865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.043237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.043266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.043625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.043654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.044016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.044048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.044379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.044409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.044784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.044816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.045173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.045206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.045573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.045605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.045969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.046001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 
00:29:11.577 [2024-10-09 00:36:42.046339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.046370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.046740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.046773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.047133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.047165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.047406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.047436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.047717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.047762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.048151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.048182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.048546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.048576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.048958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.048991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.049340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.049371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.049750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.049780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 
00:29:11.577 [2024-10-09 00:36:42.050177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.050207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.050451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.050481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.050824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.050857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.051231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.051263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.051643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.051672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.577 qpair failed and we were unable to recover it. 00:29:11.577 [2024-10-09 00:36:42.052072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.577 [2024-10-09 00:36:42.052104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.052469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.052498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.052872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.052904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.053263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.053293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.053641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.053672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 
00:29:11.578 [2024-10-09 00:36:42.054096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.054126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.054479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.054509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.054879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.054910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.055281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.055310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.055558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.055590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.055984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.056015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.056405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.056434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.056800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.056831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.057281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.057310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.057558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.057587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 
00:29:11.578 [2024-10-09 00:36:42.057969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.057999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.058234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.058263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.058625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.058654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.059013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.059044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.059413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.059441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.059776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.059807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.060189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.060218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.060562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.060593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.060962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.060994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.061368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.061397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 
00:29:11.578 [2024-10-09 00:36:42.061754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.061791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.062180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.062210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.062579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.062609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.062941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.062970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.063309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.063339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.063693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.063732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.064130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.064161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.064541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.064572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.064942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.064973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.065328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.065357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 
00:29:11.578 [2024-10-09 00:36:42.065734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.065764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.066108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.066137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.066481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.066512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.066770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.578 [2024-10-09 00:36:42.066800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.578 qpair failed and we were unable to recover it. 00:29:11.578 [2024-10-09 00:36:42.067191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.067220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.067596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.067625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.067977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.068007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.068372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.068402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.068759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.068790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.069159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.069187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 
00:29:11.579 [2024-10-09 00:36:42.069555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.069584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.069832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.069861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.070253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.070282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.070677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.070706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.071061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.071100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.071455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.071485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.071757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.071788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.072168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.072203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.072564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.072593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.072886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.072919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 
00:29:11.579 [2024-10-09 00:36:42.073290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.073319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.073645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.073675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.074039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.074070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.074432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.074461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.074826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.074857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.075227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.075257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.075617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.075646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.076011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.076041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.076413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.076442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.076815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.076845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 
00:29:11.579 [2024-10-09 00:36:42.077191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.077220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.077576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.077606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.077980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.078011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.078245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.078273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.078622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.078650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.079039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.079069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.079427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.079457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.079861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.079894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.080305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.080335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.080709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.080758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 
00:29:11.579 [2024-10-09 00:36:42.081157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.081187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.081566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.081597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.081947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.081979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.082350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.579 [2024-10-09 00:36:42.082382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.579 qpair failed and we were unable to recover it. 00:29:11.579 [2024-10-09 00:36:42.082739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.082771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.083150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.083181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.083520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.083548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.083893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.083924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.084278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.084308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.084564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.084597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 
00:29:11.580 [2024-10-09 00:36:42.084949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.084982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.085242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.085270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.085621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.085651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.086028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.086059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.086430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.086459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.086789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.086820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.087243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.087274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.087653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.087683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.088080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.088111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 00:29:11.580 [2024-10-09 00:36:42.088468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.580 [2024-10-09 00:36:42.088498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.580 qpair failed and we were unable to recover it. 
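Note on the repeated failure above: errno = 111 is ECONNREFUSED on Linux, i.e. connect() reached 10.0.0.2 but nothing was accepting connections on TCP port 4420 at that moment, so every qpair attempt in this run fails the same way until a listener appears. A minimal probe of that condition from the test host could look like the sketch below; the nc invocation is only an illustration and is not something this job executed:

    # reports "Connection refused" while no NVMe/TCP listener is bound to 10.0.0.2:4420,
    # which is the same condition posix_sock_create logs as errno = 111
    nc -zv 10.0.0.2 4420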
00:29:11.580 [2024-10-09 00:36:42.088889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.088920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.089161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.089194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.089561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.089592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.089865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.089897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.090239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.090269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.090462] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:11.580 [2024-10-09 00:36:42.090515] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:11.580 [2024-10-09 00:36:42.090524] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:11.580 [2024-10-09 00:36:42.090532] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:11.580 [2024-10-09 00:36:42.090538] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:11.580 [2024-10-09 00:36:42.090597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.090627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.090979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.091009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.091434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.091463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
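[Editor's note] The app_setup_trace NOTICE lines above describe how trace data from this nvmf target application can be collected. A minimal sketch following that notice: the spdk_trace invocation and the /dev/shm/nvmf_trace.0 file name are quoted verbatim from the notice, while having spdk_trace on PATH and the copy destination are assumptions for illustration:

  # While the target is still running, snapshot its trace events (command as given in the NOTICE)
  spdk_trace -s nvmf -i 0
  # Or preserve the shared-memory trace file for offline analysis after the run
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # destination path is an arbitrary example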
00:29:11.580 [2024-10-09 00:36:42.091734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.091764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.092133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.092163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.092531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.092560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.092589] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:29:11.580 [2024-10-09 00:36:42.092762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:29:11.580 [2024-10-09 00:36:42.092908] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:29:11.580 [2024-10-09 00:36:42.093001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.093038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.093049] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:29:11.580 [2024-10-09 00:36:42.093418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.093448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.093713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.093755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.580 [2024-10-09 00:36:42.094209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.580 [2024-10-09 00:36:42.094239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.580 qpair failed and we were unable to recover it.
00:29:11.581 [2024-10-09 00:36:42.094481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.581 [2024-10-09 00:36:42.094509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.581 qpair failed and we were unable to recover it.
00:29:11.581 [2024-10-09 00:36:42.094881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.581 [2024-10-09 00:36:42.094912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.581 qpair failed and we were unable to recover it.
00:29:11.581 [2024-10-09 00:36:42.095308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.095337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.095772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.095803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.096072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.096101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.096468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.096497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.096845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.096875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.097263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.097293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.097643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.097674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.097931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.097967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.098369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.098401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.098622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.098655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 
00:29:11.581 [2024-10-09 00:36:42.099002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.099076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.099438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.099469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.099858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.099891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.100262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.100293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.100546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.100575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.100968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.101000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.101240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.101273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.101631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.101662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.102049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.102081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.102338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.102368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 
00:29:11.581 [2024-10-09 00:36:42.102755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.102787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.103171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.103200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.103456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.103489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.103847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.103878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.104041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.104074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.104341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.104379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.104571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.104611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.104975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.105010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.105357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.105391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.105761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.105793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 
00:29:11.581 [2024-10-09 00:36:42.106020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.106052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.106413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.106443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.106780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.106810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.107172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.107201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.107507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.107537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.581 [2024-10-09 00:36:42.107873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.581 [2024-10-09 00:36:42.107905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.581 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.108144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.108173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.108529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.108559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.108922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.108953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.109320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.109348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 
00:29:11.582 [2024-10-09 00:36:42.109685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.109716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.110108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.110148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.110540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.110571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.111707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.111772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.112143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.112174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.112530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.112562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.112810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.112843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.113257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.113288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.113547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.113576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.113966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.113998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 
00:29:11.582 [2024-10-09 00:36:42.114384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.114415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.114782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.114814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.115193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.115224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.115445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.115475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.115758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.115795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.116171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.116204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.116605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.116636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.117000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.117034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.117372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.117402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.117778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.117816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 
00:29:11.582 [2024-10-09 00:36:42.118172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.118202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.118572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.118603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.118909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.118940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.119155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.119187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.119552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.119584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.119948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.119979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.120366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.120398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.120747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.120778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.121141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.121173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.121532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.121564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 
00:29:11.582 [2024-10-09 00:36:42.121797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.121831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.122210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.122246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.122601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.122632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.122906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.122939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.123326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.123356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.123736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.123767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.582 qpair failed and we were unable to recover it. 00:29:11.582 [2024-10-09 00:36:42.123896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.582 [2024-10-09 00:36:42.123926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.124297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.124330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.124742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.124773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.125128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.125159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 
00:29:11.583 [2024-10-09 00:36:42.125540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.125572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.125928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.125961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.126340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.126369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.126740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.126773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.127012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.127043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.127444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.127474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.127846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.127884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.128252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.128282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.128656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.128685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.129034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.129065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 
00:29:11.583 [2024-10-09 00:36:42.129410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.129442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.129806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.129839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.130088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.130121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.130349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.130378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.130712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.130757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.131002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.131032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.131380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.131417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.131797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.131828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.132192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.132222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.132581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.132612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 
00:29:11.583 [2024-10-09 00:36:42.132840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.132870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.133303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.133333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.133609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.133640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.134013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.134047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.134437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.134468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.134828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.134858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.135224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.135257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.135639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.135668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.135906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.135937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.136294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.136327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 
00:29:11.583 [2024-10-09 00:36:42.136697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.136759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.137035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.137064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.137415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.137444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.137791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.137831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.138035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.583 [2024-10-09 00:36:42.138064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.583 qpair failed and we were unable to recover it. 00:29:11.583 [2024-10-09 00:36:42.138293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.138323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.138760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.138791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.139167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.139198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.139575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.139605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.140081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.140112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 
00:29:11.584 [2024-10-09 00:36:42.140323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.140353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.140732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.140764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.141013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.141044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.141445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.141475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.141868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.141900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.142276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.142305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.142674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.142705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.143140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.143171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.143432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.143461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.143892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.143924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 
00:29:11.584 [2024-10-09 00:36:42.144289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.144320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.144681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.144712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.144944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.144975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.145230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.145259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.145613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.145644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.146010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.146043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.146294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.146326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.146673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.146704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.147053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.147083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.147436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.147465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 
00:29:11.584 [2024-10-09 00:36:42.147812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.147843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.148092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.148121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.148515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.148546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.148921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.148952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.149285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.149324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.149667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.149697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.150072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.150104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.150457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.150486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.150817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.150851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 00:29:11.584 [2024-10-09 00:36:42.151201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.584 [2024-10-09 00:36:42.151231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.584 qpair failed and we were unable to recover it. 
00:29:11.584 [2024-10-09 00:36:42.151614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.151644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.152006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.152035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.152270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.152299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.152632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.152661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.153101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.153133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.153493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.153522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.153889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.153922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.154329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.154359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.154584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.154612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.154821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.154851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 
00:29:11.585 [2024-10-09 00:36:42.155100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.155131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.155472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.155510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.155890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.155921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.156288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.156318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.156645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.156674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.157060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.157092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.157465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.157496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.157747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.157779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.158158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.158187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.158456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.158485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 
00:29:11.585 [2024-10-09 00:36:42.158842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.158874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.159281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.159311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.159663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.159693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.160133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.160163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.160505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.160535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.160783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.160812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.161197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.161228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.161604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.161632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.161877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.161908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.162283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.162313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 
00:29:11.585 [2024-10-09 00:36:42.162653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.162684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.163068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.163106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.163447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.163478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.163843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.163874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.164219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.164248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.164481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.164514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.164762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.164792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.165208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.165237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.165588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.165616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 00:29:11.585 [2024-10-09 00:36:42.165987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.166017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.585 qpair failed and we were unable to recover it. 
00:29:11.585 [2024-10-09 00:36:42.166387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.585 [2024-10-09 00:36:42.166418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.166518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.166547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.166888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.166919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.167289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.167319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.167683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.167711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.168091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.168121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.168520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.168557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.168896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.168926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.169293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.169321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.169573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.169601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 
00:29:11.586 [2024-10-09 00:36:42.169860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.169889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.170279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.170308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.170672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.170700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.171080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.171110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.171496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.171525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.171896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.171925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.172365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.172393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.172620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.172648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.173038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.173073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.173438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.173466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 
00:29:11.586 [2024-10-09 00:36:42.173697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.173734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.174119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.174148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.174484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.174513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.174902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.174932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.175299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.175327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.175705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.175831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.176189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.176218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.176575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.176603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.176997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.177028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.177272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.177301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 
00:29:11.586 [2024-10-09 00:36:42.177700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.177746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.178117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.178146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.178401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.178433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.178824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.178854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.179211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.179241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.179570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.179599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.179850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.179880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.180113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.180142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.180514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.180542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 00:29:11.586 [2024-10-09 00:36:42.180812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.180843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.586 qpair failed and we were unable to recover it. 
00:29:11.586 [2024-10-09 00:36:42.181223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.586 [2024-10-09 00:36:42.181251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.587 [2024-10-09 00:36:42.181628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.181657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.587 [2024-10-09 00:36:42.181892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.181923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.587 [2024-10-09 00:36:42.182145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.182173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.587 [2024-10-09 00:36:42.182558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.182587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.587 [2024-10-09 00:36:42.182890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.182920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.587 [2024-10-09 00:36:42.183274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.183303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.587 [2024-10-09 00:36:42.183671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.183700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.587 [2024-10-09 00:36:42.184074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.184104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.587 [2024-10-09 00:36:42.184468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.184497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 
00:29:11.587 [2024-10-09 00:36:42.184878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.587 [2024-10-09 00:36:42.184910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.587 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.185266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.185298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.185658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.185689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.185925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.185956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.186231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.186263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.186495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.186524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.186903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.186934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.187313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.187342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.187671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.187701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.188080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.188111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 
00:29:11.860 [2024-10-09 00:36:42.188474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.188502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.188880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.188910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.189298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.189328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.189564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.189592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.189970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.190000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.190365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.190393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.190740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.190770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.191041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.191071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.191440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.191469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.191670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.191700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 
00:29:11.860 [2024-10-09 00:36:42.191923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.191954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.192174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.192203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.192586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.192616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.192989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.193021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.193359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.193389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.193612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.193641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.193982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.194013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.194392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.194422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.194569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.194598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.194821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.194852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 
00:29:11.860 [2024-10-09 00:36:42.195253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.195281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.195650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.195680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.196069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.196099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.860 [2024-10-09 00:36:42.196483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.860 [2024-10-09 00:36:42.196512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.860 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.196883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.196914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.197132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.197160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.197522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.197564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.197939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.197970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.198207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.198239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.198491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.198520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 
00:29:11.861 [2024-10-09 00:36:42.199014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.199044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.199392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.199422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.199790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.199820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.200188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.200216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.200431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.200460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.200710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.200751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.201103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.201132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.201498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.201528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.201899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.201930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.202305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.202334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 
00:29:11.861 [2024-10-09 00:36:42.202714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.202756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.203019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.203052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.203413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.203443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.203654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.203683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.204055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.204085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.204432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.204460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.204827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.204858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.205228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.205256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.205627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.205656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 00:29:11.861 [2024-10-09 00:36:42.206028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.861 [2024-10-09 00:36:42.206058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.861 qpair failed and we were unable to recover it. 
00:29:11.861 [2024-10-09 00:36:42.206322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.861 [2024-10-09 00:36:42.206350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:11.861 qpair failed and we were unable to recover it.
00:29:11.861 [... the same three-message sequence -- posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every reconnection attempt from 2024-10-09 00:36:42.206 through 00:36:42.282 (console timestamps 00:29:11.861 to 00:29:11.867) ...]
00:29:11.867 [2024-10-09 00:36:42.283145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.283174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.283600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.283628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.284074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.284104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.284478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.284509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.284847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.284877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.285228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.285258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.285643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.285672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.286059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.286090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.286329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.286358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.286735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.286765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 
00:29:11.867 [2024-10-09 00:36:42.287164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.287199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.287428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.287457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.287826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.287856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.288238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.288266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.288644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.288675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.289046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.289076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.289436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.289466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.289855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.289884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.290124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.290153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.290556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.290585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 
00:29:11.867 [2024-10-09 00:36:42.290836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.290869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.291252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.291281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.291381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.291408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.291674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.291702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.292094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.292123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.292365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.292393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.292760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.292792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.293143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.293174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.293413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.293442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.293692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.293729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 
00:29:11.867 [2024-10-09 00:36:42.293936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.293965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.294334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.294364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.294746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.294777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.295036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.295065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.295440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.295468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.867 [2024-10-09 00:36:42.295806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-10-09 00:36:42.295838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.867 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.296091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.296119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.296465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.296501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.296880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.296911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.297295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.297324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 
00:29:11.868 [2024-10-09 00:36:42.297562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.297591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.297979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.298009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.298189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.298220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.298447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.298477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.298838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.298868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.299240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.299269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.299631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.299660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.300015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.300045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.300417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.300446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.300812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.300842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 
00:29:11.868 [2024-10-09 00:36:42.301231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.301260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.301493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.301522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.301739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.301770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.302003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.302032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.302418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.302447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.302789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.302821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.303235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.303264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.303632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.303662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.304037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.304069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.304289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.304317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 
00:29:11.868 [2024-10-09 00:36:42.304535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.304564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.304908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.304939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.305283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.305321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.305656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.305686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.306069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.306099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.306443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.306473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.306837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.306868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.307259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.307288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.307514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.307542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.307813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.307847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 
00:29:11.868 [2024-10-09 00:36:42.308224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.308254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.308613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.308642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.309013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.868 [2024-10-09 00:36:42.309043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.868 qpair failed and we were unable to recover it. 00:29:11.868 [2024-10-09 00:36:42.309393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.309421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.309794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.309825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.310049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.310078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.310305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.310334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.310775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.310805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.311063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.311092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.311468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.311496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 
00:29:11.869 [2024-10-09 00:36:42.311744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.311776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.311968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.311998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.312355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.312385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.312714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.312756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.313023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.313052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.313398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.313428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.313805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.313835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.314110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.314139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.314357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.314386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.314757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.314787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 
00:29:11.869 [2024-10-09 00:36:42.315154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.315184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.315442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.315473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.315832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.315863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.316238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.316268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.316634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.316662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.317044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.317075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.317425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.317462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.317672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.317701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.318073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.318104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.318509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.318539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 
00:29:11.869 [2024-10-09 00:36:42.318896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.318927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.319290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.319320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.319576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.319604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.319980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.320010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.320254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.320284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.320525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.320559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.869 [2024-10-09 00:36:42.320949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.869 [2024-10-09 00:36:42.320979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.869 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.321208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.321237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.321598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.321626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.321985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.322016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 
00:29:11.870 [2024-10-09 00:36:42.322383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.322413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.322794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.322825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.323263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.323292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.323536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.323567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.323950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.323980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.324316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.324346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.324710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.324747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.325151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.325180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.325534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.325564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.325913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.325943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 
00:29:11.870 [2024-10-09 00:36:42.326325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.326354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.326588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.326617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.326854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.326885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.327220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.327249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.327615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.327643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.328015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.328045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.328431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.328460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.328835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.328866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.329234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.329262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.329497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.329526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 
00:29:11.870 [2024-10-09 00:36:42.329831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.329861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.330254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.330283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.330492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.330527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.330888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.330919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.331293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.331323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.331589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.331619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.331951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.331982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.332345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.332374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.332750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.332780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.333104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.333134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 
00:29:11.870 [2024-10-09 00:36:42.333496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.333525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.333826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.333857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.334230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.334260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.334649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.334679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.335072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.335104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.335454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.335483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.335654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.870 [2024-10-09 00:36:42.335687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.870 qpair failed and we were unable to recover it. 00:29:11.870 [2024-10-09 00:36:42.336107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.336138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.336505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.336534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.336893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.336924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 
00:29:11.871 [2024-10-09 00:36:42.337277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.337306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.337683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.337712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.337970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.337999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.338375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.338404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.338640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.338669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.339100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.339131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.339477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.339507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.339892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.339923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.340276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.340304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.340689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.340718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 
00:29:11.871 [2024-10-09 00:36:42.341105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.341134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.341501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.341531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.341893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.341924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.342157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.342185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.342466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.342498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.342659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.342687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.343061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.343091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.343476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.343505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.343880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.343918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.344286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.344315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 
00:29:11.871 [2024-10-09 00:36:42.344518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.344546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.344844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.344874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.345240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.345270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.345493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.345521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.345743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.345774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.345996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.346024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.346392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.346420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.346790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.346821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.347194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.347224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.347648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.347677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 
00:29:11.871 [2024-10-09 00:36:42.348101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.348131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.348515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.348543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.348890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.348921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.349181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.349210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.349558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.349588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.349802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.349834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.350080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.350110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.350489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.871 [2024-10-09 00:36:42.350518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.871 qpair failed and we were unable to recover it. 00:29:11.871 [2024-10-09 00:36:42.350885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.350918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.351310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.351340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 
00:29:11.872 [2024-10-09 00:36:42.351554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.351584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.351860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.351891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.352141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.352171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.352420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.352450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.352800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.352831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.353207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.353236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.353610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.353642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.354008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.354038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.354398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.354428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.354643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.354671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 
00:29:11.872 [2024-10-09 00:36:42.355067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.355104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.355491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.355521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.355764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.355797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.356199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.356228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.356595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.356624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.356952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.356984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.357346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.357376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.357778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.357809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.358266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.358295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.358536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.358565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 
00:29:11.872 [2024-10-09 00:36:42.358851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.358882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.359286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.359316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.359574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.359603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.359850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.359880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.360247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.360278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.360606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.360635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.360994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.361024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.361392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.361423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.361779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.361810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.361914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.361942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 
00:29:11.872 [2024-10-09 00:36:42.362310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.362339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.362695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.362736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.363002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.363031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.363378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.363407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.363780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.363811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.364168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.364197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.364575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.364604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.364967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.365005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.872 [2024-10-09 00:36:42.365363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.872 [2024-10-09 00:36:42.365395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.872 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.365750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.365784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 
00:29:11.873 [2024-10-09 00:36:42.366020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.366050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.366419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.366450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.366833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.366864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.367225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.367256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.367622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.367651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.368065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.368096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.368447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.368477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.368838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.368869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.369112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.369145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.369515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.369548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 
00:29:11.873 [2024-10-09 00:36:42.369892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.369924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.370298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.370328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.370685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.370717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.371094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.371125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.371377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.371408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.371776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.371809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.372176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.372206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.372424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.372454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.372696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.372740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.373096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.373127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 
00:29:11.873 [2024-10-09 00:36:42.373493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.373523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.373899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.373932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.374314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.374345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.374589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.374621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.374928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.374973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.375355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.375387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.375638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.375669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.375936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.375968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.376325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.376356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.376736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.376768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 
00:29:11.873 [2024-10-09 00:36:42.377030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.377060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.377451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.377482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.377841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.377874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.873 qpair failed and we were unable to recover it. 00:29:11.873 [2024-10-09 00:36:42.378252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.873 [2024-10-09 00:36:42.378282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.378638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.378668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.379063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.379095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.379455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.379487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.379836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.379867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.380132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.380160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.380507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.380536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 
00:29:11.874 [2024-10-09 00:36:42.380921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.380953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.381322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.381352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.381742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.381774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.382005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.382036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.382399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.382427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.382785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.382815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.383206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.383236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.383595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.383624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.384070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.384106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.384496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.384526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 
00:29:11.874 [2024-10-09 00:36:42.384897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.384927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.385173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.385202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.385424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.385454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.385801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.385831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.386072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.386101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.386469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.386498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.386710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.386763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.387133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.387164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.387377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.387406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.387767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.387808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 
00:29:11.874 [2024-10-09 00:36:42.388546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.388587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.389033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.389071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.389382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.389412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.389779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.389810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.390198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.390228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.390439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.390477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.390718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.390761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.390982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.391012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.391363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.391392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.391765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.391798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 
00:29:11.874 [2024-10-09 00:36:42.392148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.392178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.392440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.392471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.392833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.392863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.393231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.874 [2024-10-09 00:36:42.393261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.874 qpair failed and we were unable to recover it. 00:29:11.874 [2024-10-09 00:36:42.393354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.393380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.393609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.393639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.394077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.394108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.394474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.394506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.394871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.394901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.395244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.395273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 
00:29:11.875 [2024-10-09 00:36:42.395481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.395510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.395749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.395781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.396101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.396132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.396490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.396518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.396935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.396967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.397320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.397348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.397703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.397744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.398154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.398183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.398557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.398586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.398976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.399006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 
00:29:11.875 [2024-10-09 00:36:42.399386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.399417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.399662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.399691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.399862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.399901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.400287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.400317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.400569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.400598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.400983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.401016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.401359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.401388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.401761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.401792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.402170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.402198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.402440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.402469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 
00:29:11.875 [2024-10-09 00:36:42.402569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.402596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.402845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.402875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.403143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.403177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.403515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.403547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.403916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.403948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.404316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.404345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.404593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.404622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.404800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.404830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.405172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.405202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.405466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.405494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 
00:29:11.875 [2024-10-09 00:36:42.405786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.405817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.406214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.406243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.406475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.406506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.406851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.406881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.407260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.875 [2024-10-09 00:36:42.407289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.875 qpair failed and we were unable to recover it. 00:29:11.875 [2024-10-09 00:36:42.407662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.407692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.408126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.408157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.408522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.408552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.408807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.408839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.409236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.409272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 
00:29:11.876 [2024-10-09 00:36:42.409616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.409648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.410038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.410069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.410288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.410319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.410686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.410717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.411169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.411199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.411427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.411457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.411848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.411880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.412293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.412323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.412773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.412805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.413148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.413179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 
00:29:11.876 [2024-10-09 00:36:42.413541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.413570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.413931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.413964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.414350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.414378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.414546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.414575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.414834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.414867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.415268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.415298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.415671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.415700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.416074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.416104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.416482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.416512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.416652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.416681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 
00:29:11.876 [2024-10-09 00:36:42.416965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.416996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.417248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.417282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.417499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.417529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.417899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.417933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.418301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.418335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.418698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.418750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.419035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.419066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.419431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.419461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.419840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.419872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.420088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.420117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 
00:29:11.876 [2024-10-09 00:36:42.420558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.420588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.420963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.420992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.421364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.421392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.876 qpair failed and we were unable to recover it. 00:29:11.876 [2024-10-09 00:36:42.421758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.876 [2024-10-09 00:36:42.421788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.422116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.422145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.422494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.422522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.422888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.422918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.423271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.423300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.423666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.423694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.423949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.423979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 
00:29:11.877 [2024-10-09 00:36:42.424239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.424269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.424656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.424685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.425064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.425094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.425474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.425501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.425743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.425777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.426063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.426091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.426459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.426488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.426837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.426868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.427240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.427268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.427641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.427670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 
00:29:11.877 [2024-10-09 00:36:42.428071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.428102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.428489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.428517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.428878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.428907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.429280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.429310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.429686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.429714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.430086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.430117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.430487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.430516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.430867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.430905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.431278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.431307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.431660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.431688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 
00:29:11.877 [2024-10-09 00:36:42.431934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.431964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.432219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.432251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.432492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.432521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.432867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.432897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.433264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.877 [2024-10-09 00:36:42.433293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.877 qpair failed and we were unable to recover it. 00:29:11.877 [2024-10-09 00:36:42.433668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.433697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.434093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.434123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.434354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.434388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.434756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.434788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.435055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.435084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 
00:29:11.878 [2024-10-09 00:36:42.435431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.435459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.435674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.435702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.436134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.436164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.436499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.436528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.436786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.436816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.437189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.437217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.437579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.437608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.438007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.438036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.438402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.438430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.438785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.438816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 
00:29:11.878 [2024-10-09 00:36:42.439229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.439258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.439621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.439651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.440037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.440067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.440447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.440476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.440837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.440867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.441241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.441270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.441625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.441664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.442039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.442069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.442457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.442486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.442854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.442893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 
00:29:11.878 [2024-10-09 00:36:42.443265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.443294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.443554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.443583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.443877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.443908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.444263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.444293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.444665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.444700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.445057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.445087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.445450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.445479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.445695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.445731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.445990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.446018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.446368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.446398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 
00:29:11.878 [2024-10-09 00:36:42.446758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.446789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.878 [2024-10-09 00:36:42.447037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.878 [2024-10-09 00:36:42.447065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.878 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.447456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.447485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.447882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.447912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.448295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.448323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.448709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.448747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.449102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.449130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.449358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.449387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.449843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.449873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.450254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.450283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 
00:29:11.879 [2024-10-09 00:36:42.450654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.450682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.451073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.451104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.451491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.451519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.451944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.451975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.452342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.452371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.452586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.452615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.452985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.453014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.453379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.453407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.453768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.453797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.454150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.454179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 
00:29:11.879 [2024-10-09 00:36:42.454541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.454571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.454816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.454857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.455211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.455242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.455613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.455641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.455986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.456018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.456387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.456416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.456690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.456718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.457125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.457154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.457534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.457563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.457933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.457963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 
00:29:11.879 [2024-10-09 00:36:42.458359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.458387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.458632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.458664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.459011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.459042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.459445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.459474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.459742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.459772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.460148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.460178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.460551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.460580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.460857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.460887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.461281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.461310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.461689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.461718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 
00:29:11.879 [2024-10-09 00:36:42.462074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.462103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.879 qpair failed and we were unable to recover it. 00:29:11.879 [2024-10-09 00:36:42.462478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.879 [2024-10-09 00:36:42.462507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.462887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.462917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.463186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.463214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.463580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.463609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.463963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.463993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.464369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.464398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.464786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.464815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.465181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.465212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.465523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.465552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 
00:29:11.880 [2024-10-09 00:36:42.465799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.465829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.466225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.466254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.466612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.466641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.467019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.467048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.467297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.467326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.467692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.467730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.468099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.468127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.468497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.468525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.468787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.468817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.469163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.469192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 
00:29:11.880 [2024-10-09 00:36:42.469581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.469610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.469982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.470011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.470402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.470436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.470830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.470860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.471090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.471121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.471441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.471470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.471868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.471898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.472266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.472294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.472654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.472684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.473067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.473099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 
00:29:11.880 [2024-10-09 00:36:42.473483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.473512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.473860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.473891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.474286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.474315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.474688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.474716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.475091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.475120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.475504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.475533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.475893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.475925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.476281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.476310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.476683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.476711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.477088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.477116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 
00:29:11.880 [2024-10-09 00:36:42.477477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.477506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.477855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.880 [2024-10-09 00:36:42.477887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.880 qpair failed and we were unable to recover it. 00:29:11.880 [2024-10-09 00:36:42.478097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.478126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.478369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.478401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.478625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.478655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.479026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.479057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.479438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.479468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.479812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.479843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.480088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.480117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.480508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.480544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 
00:29:11.881 [2024-10-09 00:36:42.480784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.480813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.481178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.481207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.481572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.481602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.481957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.481988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.482358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.482388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:11.881 [2024-10-09 00:36:42.482740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.881 [2024-10-09 00:36:42.482770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:11.881 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.483131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.483163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.483523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.483552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.483933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.483962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.484342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.484370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 
00:29:12.154 [2024-10-09 00:36:42.484748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.484778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.485175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.485210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.485432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.485461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.485693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.485734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.486113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.486142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.486519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.486548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.486939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.486969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.487325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.487354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.487751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.487782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.488144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.488172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 
00:29:12.154 [2024-10-09 00:36:42.488578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.488607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.488940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.154 [2024-10-09 00:36:42.488971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.154 qpair failed and we were unable to recover it. 00:29:12.154 [2024-10-09 00:36:42.489218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.489247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.489611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.489640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.489901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.489931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.490313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.490341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.490714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.490762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.491132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.491162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.491557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.491586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.491940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.491971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 
00:29:12.155 [2024-10-09 00:36:42.492339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.492368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.492736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.492767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.493148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.493177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.493548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.493577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.493903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.493933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.494310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.494339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.494699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.494749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.495010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.495039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.495393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.495422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.495802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.495834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 
00:29:12.155 [2024-10-09 00:36:42.496203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.496234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.496612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.496642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.497021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.497052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.497310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.497342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.497441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.497468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Write completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Write completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Write completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Write completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error 
(sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Write completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Write completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Write completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Read completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 Write completed with error (sct=0, sc=8) 00:29:12.155 starting I/O failed 00:29:12.155 [2024-10-09 00:36:42.498300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.155 [2024-10-09 00:36:42.499027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.499148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.499567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.499604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.500108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.500213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.500674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.500710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.501099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.501132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.501499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.155 [2024-10-09 00:36:42.501530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.155 qpair failed and we were unable to recover it. 00:29:12.155 [2024-10-09 00:36:42.501783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.501837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 
00:29:12.156 [2024-10-09 00:36:42.502207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.502237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.502600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.502630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.502965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.502997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.503362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.503391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.503717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.503759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.504083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.504112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.504473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.504503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.504745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.504776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.505140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.505170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.505442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.505475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 
00:29:12.156 [2024-10-09 00:36:42.505728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.505759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.505968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.505997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.506250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.506280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.506669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.506701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.506971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.507001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.507361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.507392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.507781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.507813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.508171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.508200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.508578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.508611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.508954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.508984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 
00:29:12.156 [2024-10-09 00:36:42.509363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.509392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.509642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.509678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.510118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.510148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.510496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.510526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.510865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.510896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.511268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.511296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.511652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.511681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.511995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.512026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.512366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.512404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.512764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.512795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 
00:29:12.156 [2024-10-09 00:36:42.513173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.513203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.513563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.513592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.513950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.513981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.514311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.514339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.514701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.514737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.515091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.515120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.515221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.515249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa04000b90 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.515817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.515927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.516377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.156 [2024-10-09 00:36:42.516413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.156 qpair failed and we were unable to recover it. 00:29:12.156 [2024-10-09 00:36:42.516788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.157 [2024-10-09 00:36:42.516824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.157 qpair failed and we were unable to recover it. 
00:29:12.157 [2024-10-09 00:36:42.517123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.157 [2024-10-09 00:36:42.517154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:12.157 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats continuously from 00:36:42.517 through 00:36:42.594 ...]
00:29:12.162 [2024-10-09 00:36:42.594219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.162 [2024-10-09 00:36:42.594248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420
00:29:12.162 qpair failed and we were unable to recover it.
00:29:12.162 [2024-10-09 00:36:42.594523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.594552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.594975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.595006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.595383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.595413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.595784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.595814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.596160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.596190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.596570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.596601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.596801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.596831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.597127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.597160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.597481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.597511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.597872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.597903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 
00:29:12.162 [2024-10-09 00:36:42.598282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.598317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.598689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.598719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.599078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.599109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.599366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.599397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.599618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.599649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.600031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.600069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.600414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.600445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.600791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.600823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.601161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.601193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 00:29:12.162 [2024-10-09 00:36:42.601558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.162 [2024-10-09 00:36:42.601589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.162 qpair failed and we were unable to recover it. 
00:29:12.163 [2024-10-09 00:36:42.601943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.601973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.602205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.602236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.602620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.602650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.603032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.603062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.603463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.603494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.603764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.603798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.604251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.604283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.604627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.604658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.605032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.605063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.605445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.605475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 
00:29:12.163 [2024-10-09 00:36:42.605846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.605878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.606252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.606282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.606522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.606555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.606837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.606868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.607235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.607265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.607651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.607682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.607903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.607933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.608343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.608374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.608733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.608764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.609121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.609151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 
00:29:12.163 [2024-10-09 00:36:42.609531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.609561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.609899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.609931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.610154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.610189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.610555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.610588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.610842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.610874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.611116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.611146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.611513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.611543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.611899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.611931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.612276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.612305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.612665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.612695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 
00:29:12.163 [2024-10-09 00:36:42.612985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.613017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.613394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.613424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.613817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.613848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.614095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.614124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.163 qpair failed and we were unable to recover it. 00:29:12.163 [2024-10-09 00:36:42.614506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.163 [2024-10-09 00:36:42.614536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.614899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.614931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.615307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.615337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.615710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.615749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.616109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.616141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.616491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.616520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 
00:29:12.164 [2024-10-09 00:36:42.616853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.616883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.617238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.617267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.617462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.617492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.617763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.617796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.618170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.618200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.618546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.618575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.618921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.618952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.619338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.619367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.619703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.619757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.620126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.620171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 
00:29:12.164 [2024-10-09 00:36:42.620516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.620553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.620898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.620929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.621025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.621054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.621378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.621408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.621770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.621803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.622040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.622070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.622445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.622477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.622841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.622873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.623212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.623242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.623603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.623634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 
00:29:12.164 [2024-10-09 00:36:42.623866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.623896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.624284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.624315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.624672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.624705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.624967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.625000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.625376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.625409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.625780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.625812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.626164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.626193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.626424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.626453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.626676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.626707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.627105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.627137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 
00:29:12.164 [2024-10-09 00:36:42.627523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.627555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.627904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.627937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.628177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.164 [2024-10-09 00:36:42.628208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.164 qpair failed and we were unable to recover it. 00:29:12.164 [2024-10-09 00:36:42.628423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.628455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.628701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.628745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.629113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.629146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.629496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.629527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.629919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.629952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.630328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.630359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.630707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.630746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 
00:29:12.165 [2024-10-09 00:36:42.631166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.631197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.631420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.631450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.631806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.631841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.632218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.632249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.632518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.632552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.632890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.632923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.633287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.633319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.633665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.633696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.633956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.633986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.634377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.634409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 
00:29:12.165 [2024-10-09 00:36:42.634757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.634790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.635168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.635199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.635622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.635654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.635868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.635900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.636247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.636280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.636648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.636680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.637049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.637081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.637425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.637456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.637813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.637846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.638199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.638237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 
00:29:12.165 [2024-10-09 00:36:42.638582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.638612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.638712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.638753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334180 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.639218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.639327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.639752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.639794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.640211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.640246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.640598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.640628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.641078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.641185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.641653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.641691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.641982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.642017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 00:29:12.165 [2024-10-09 00:36:42.642402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.165 [2024-10-09 00:36:42.642433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.165 qpair failed and we were unable to recover it. 
00:29:12.165 [2024-10-09 00:36:42.642779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.165 [2024-10-09 00:36:42.642811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420
00:29:12.165 qpair failed and we were unable to recover it.
00:29:12.165 [... the same connect()/qpair-failure triplet repeats continuously from 00:36:42.642779 through 00:36:42.718629: posix_sock_create reports connect() failed with errno = 111 on every attempt, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:12.171 [2024-10-09 00:36:42.718600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.171 [2024-10-09 00:36:42.718629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420
00:29:12.171 qpair failed and we were unable to recover it.
00:29:12.171 [2024-10-09 00:36:42.719001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.171 [2024-10-09 00:36:42.719032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.171 qpair failed and we were unable to recover it. 00:29:12.171 [2024-10-09 00:36:42.719237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.171 [2024-10-09 00:36:42.719266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.171 qpair failed and we were unable to recover it. 00:29:12.171 [2024-10-09 00:36:42.719369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.171 [2024-10-09 00:36:42.719398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9f8000b90 with addr=10.0.0.2, port=4420 00:29:12.171 qpair failed and we were unable to recover it. 00:29:12.171 [2024-10-09 00:36:42.719628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132aed0 is same with the state(6) to be set 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Read completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write 
completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 Write completed with error (sct=0, sc=8) 00:29:12.171 starting I/O failed 00:29:12.171 [2024-10-09 00:36:42.720451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.171 [2024-10-09 00:36:42.721029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.171 [2024-10-09 00:36:42.721148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.171 qpair failed and we were unable to recover it. 00:29:12.171 [2024-10-09 00:36:42.721566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.171 [2024-10-09 00:36:42.721606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.171 qpair failed and we were unable to recover it. 00:29:12.171 [2024-10-09 00:36:42.721991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.171 [2024-10-09 00:36:42.722096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.171 qpair failed and we were unable to recover it. 00:29:12.171 [2024-10-09 00:36:42.722503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.171 [2024-10-09 00:36:42.722541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.171 qpair failed and we were unable to recover it. 00:29:12.171 [2024-10-09 00:36:42.722922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.171 [2024-10-09 00:36:42.722956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.171 qpair failed and we were unable to recover it. 00:29:12.171 [2024-10-09 00:36:42.723252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.171 [2024-10-09 00:36:42.723281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.171 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.723560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.723590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.723931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.723963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 
00:29:12.172 [2024-10-09 00:36:42.724310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.724341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.724573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.724603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.724997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.725029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.725403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.725432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.725679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.725709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.725963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.725995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.726401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.726432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.726862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.726893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.727277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.727307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.727577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.727607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 
00:29:12.172 [2024-10-09 00:36:42.727958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.727989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.728351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.728380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.728605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.728634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.728867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.728900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.729280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.729310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.729667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.729697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.729940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.729970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.730189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.730218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.730630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.730660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.731043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.731074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 
00:29:12.172 [2024-10-09 00:36:42.731431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.731462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.731863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.731894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.732253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.732282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.732664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.732693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.732837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.732868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.733243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.733272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.733622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.733658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.734012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.734044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.734412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.734442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.172 [2024-10-09 00:36:42.734809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.734840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 
00:29:12.172 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.172 [2024-10-09 00:36:42.735097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.172 [2024-10-09 00:36:42.735130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.172 qpair failed and we were unable to recover it. 00:29:12.173 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:12.173 [2024-10-09 00:36:42.735510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.735548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:12.173 [2024-10-09 00:36:42.735937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.735970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.173 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.173 [2024-10-09 00:36:42.736329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.736362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.736683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.736712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.737119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.737152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.737500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.737530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.737898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.737939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 
00:29:12.173 [2024-10-09 00:36:42.738197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.738238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.738588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.738619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.738989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.739021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.739386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.739417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.739790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.739820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.740178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.740216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.740562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.740592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.740949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.740982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.741200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.741230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.741589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.741620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 
00:29:12.173 [2024-10-09 00:36:42.741976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.742009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.742348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.742379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.742739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.742770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.743177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.743206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.743422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.743452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.743663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.743694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.744059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.744089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.744440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.744470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.744850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.744881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.745233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.745264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 
00:29:12.173 [2024-10-09 00:36:42.745625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.745654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.746028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.746059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.746431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.746461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.746833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.746864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.747074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.747103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.747537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.747570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.747796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.747827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.748228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.748257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.748526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.748563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.748828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.748865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 
00:29:12.173 [2024-10-09 00:36:42.749127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.173 [2024-10-09 00:36:42.749159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.173 qpair failed and we were unable to recover it. 00:29:12.173 [2024-10-09 00:36:42.749531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.749562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.749931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.749963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.750331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.750361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.750740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.750773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.751150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.751181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.751539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.751569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.751910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.751940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.752306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.752337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.752699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.752756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 
00:29:12.174 [2024-10-09 00:36:42.753127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.753157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.753507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.753538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.753771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.753802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.754037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.754066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.754436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.754468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.754676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.754711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.755067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.755097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.755468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.755499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.755879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.755909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.756283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.756313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 
00:29:12.174 [2024-10-09 00:36:42.756646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.756677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.757098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.757129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.757477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.757508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.757868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.757899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.758268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.758301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.758657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.758686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.759081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.759112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.759328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.759358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.759634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.759669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.760101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.760133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 
00:29:12.174 [2024-10-09 00:36:42.760480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.760511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.760874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.760905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.761258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.761289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.761556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.761589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.761745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.761775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.762045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.762074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.762463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.762493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.762861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.762892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.763138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.763168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.174 [2024-10-09 00:36:42.763376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.763405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 
00:29:12.174 [2024-10-09 00:36:42.763816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.174 [2024-10-09 00:36:42.763847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.174 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.764203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.764235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.764609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.764640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.764886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.764916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.765279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.765313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.765457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.765487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.765781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.765815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.766187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.766217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.766562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.766591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.766946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.766979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 
00:29:12.175 [2024-10-09 00:36:42.767348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.767378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.767742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.767774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.768057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.768086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.768451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.768480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.768701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.768739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.769108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.769144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.769421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.769451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.769699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.769745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.770144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.770173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.770533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.770563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 
00:29:12.175 [2024-10-09 00:36:42.770956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.770988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.771234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.771263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.771631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.771661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.771932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.771964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.772330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.772362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.772615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.772648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.773058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.773088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.773444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.773475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.773846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.773878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.774094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.774124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 
00:29:12.175 [2024-10-09 00:36:42.774363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.774393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.774765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.774797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.775143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.775175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.775576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.775605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.775958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.775990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.776343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.776373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.776585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.776615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.175 [2024-10-09 00:36:42.776990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.175 [2024-10-09 00:36:42.777020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.175 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.777382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.777416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 
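Every record in the run above is the host-side reconnect loop hitting connect() failure errno = 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 at this point, which is the condition this target-disconnect test case exercises. A quick shell check of that errno mapping (assuming python3 is available on the box):

# Print the symbolic name and message for errno 111; expected output: ECONNREFUSED Connection refused
python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'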
00:29:12.441 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.441 [2024-10-09 00:36:42.777801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.777833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:12.441 [2024-10-09 00:36:42.778209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.778241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.778354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.778385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.441 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.441 [2024-10-09 00:36:42.778781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.778814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.779162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.779193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.779563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.779593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.779936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.779970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.780346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.780376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 
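Interleaved with the reconnect errors, the xtrace lines above show the test script installing its cleanup trap and creating the backing block device for the next test case. A minimal standalone sketch of that bdev step against a running SPDK target (assuming the default rpc.py socket path):

# Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0; rpc.py prints the bdev name on success.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0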
00:29:12.441 [2024-10-09 00:36:42.780759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.780789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.781150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.781182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.781522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.781552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.781895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.781927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.782158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.782187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.782526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.782557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.782914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.782945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.783289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.783320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.783668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.783697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 00:29:12.441 [2024-10-09 00:36:42.784110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.441 [2024-10-09 00:36:42.784141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.441 qpair failed and we were unable to recover it. 
00:29:12.441 [2024-10-09 00:36:42.784403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.784431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.784778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.784809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.785049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.785078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.785464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.785494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.785862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.785893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.786138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.786171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.786413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.786441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.786813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.786844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.787204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.787235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.787604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.787635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 
00:29:12.442 [2024-10-09 00:36:42.788015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.788046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.788414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.788444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.788815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.788845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.789202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.789231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.789595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.789623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.789989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.790019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.790119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.790148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.790424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.790462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.790827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.790858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.791227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.791255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 
00:29:12.442 [2024-10-09 00:36:42.791644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.791673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.792039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.792068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.792419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.792449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.792817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.792854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.793219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.793248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.793620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.793649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.793875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.793906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.794268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.794297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.794570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.794599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.794970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.795002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 
00:29:12.442 [2024-10-09 00:36:42.795337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.795368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.795729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.795760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.796108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.796137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.796502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.796533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.796890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.796919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.797215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.797244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.797597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.797627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.798072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.798103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.798491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.798520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 00:29:12.442 [2024-10-09 00:36:42.798869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.442 [2024-10-09 00:36:42.798900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.442 qpair failed and we were unable to recover it. 
00:29:12.443 Malloc0 00:29:12.443 [2024-10-09 00:36:42.799270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-10-09 00:36:42.799300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.799667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.799697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.799922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.799955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.800186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.800216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.443 [2024-10-09 00:36:42.800599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.800630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:12.443 [2024-10-09 00:36:42.801003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.801035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.443 [2024-10-09 00:36:42.801386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.801418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.443 [2024-10-09 00:36:42.801781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.801814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it.
00:29:12.443 [2024-10-09 00:36:42.802229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.802258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.802615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.802645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.803017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.803048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.803419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.803448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.803826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.803857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.804078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.804106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.804465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.804495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.804731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.804763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.805088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.805117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.805339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.805368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 
00:29:12.443 [2024-10-09 00:36:42.805621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.805654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.805977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.806007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.806378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.806408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.806712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.443 [2024-10-09 00:36:42.806736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.806773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.807174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.807203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.807569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.807598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.807976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.808007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.808250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.808281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.808622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.808651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 
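The "*** TCP Transport Init ***" notice above is the target-side confirmation of the nvmf_create_transport call traced a few lines earlier (the extra -o option seen in the xtrace line comes from the test harness and is not needed for a basic setup). A minimal sketch of that step on its own:

# Initialize the TCP transport in the SPDK NVMe-oF target; tuning flags such as queue depth are optional.
scripts/rpc.py nvmf_create_transport -t tcp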
00:29:12.443 [2024-10-09 00:36:42.809042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.809073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.809287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.809315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.809703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.809751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.810042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.810070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.810445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.810475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.810841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.810872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.811246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.811276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.811640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.443 [2024-10-09 00:36:42.811669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.443 qpair failed and we were unable to recover it. 00:29:12.443 [2024-10-09 00:36:42.812043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.812073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.812452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.812481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 
00:29:12.444 [2024-10-09 00:36:42.812857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.812890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.813268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.813298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.813661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.813691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.814070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.814100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.814476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.814505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.814737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.814770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.815180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.815210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.815541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.815570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.444 [2024-10-09 00:36:42.815919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.815950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 
00:29:12.444 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:12.444 [2024-10-09 00:36:42.816316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-10-09 00:36:42.816347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable [2024-10-09 00:36:42.816707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.816755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.444 [2024-10-09 00:36:42.817124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.817154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.817516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.817546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.817827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.817858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.818216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.818245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.818589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.818617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.818881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.818912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it.
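The subsystem created in the xtrace line above, nqn.2016-06.io.spdk:cnode1, is the one the host keeps trying to reach on 10.0.0.2:4420. A standalone sketch of the same step (the serial number is arbitrary):

# Create an NVMe-oF subsystem, allow any host to connect (-a), and set its serial number (-s).
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001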
00:29:12.444 [2024-10-09 00:36:42.819260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.819291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.819653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.819683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.819897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.819931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.820284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.820313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.820584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.820612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.820972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.821008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.821392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.821420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.821776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.821807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.822072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.822102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.822513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.822543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 
00:29:12.444 [2024-10-09 00:36:42.822932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.822962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.823359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.823389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.823606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.823634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.824051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.824081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.824438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.824467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.824890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.824920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.825286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.825315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.825679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.825708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.444 [2024-10-09 00:36:42.826114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.444 [2024-10-09 00:36:42.826144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.444 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.826502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.826533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 
00:29:12.445 [2024-10-09 00:36:42.826890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.826922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.827174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.827205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.827452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.827481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.827871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.827902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.445 [2024-10-09 00:36:42.828263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.828293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:12.445 [2024-10-09 00:36:42.828660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.828690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.445 [2024-10-09 00:36:42.829055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.829087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.445 [2024-10-09 00:36:42.829374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.829404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 
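Attaching the malloc bdev as a namespace is the last setup step visible in this excerpt; a TCP listener on 10.0.0.2:4420 would normally be added next, but that call is not shown here, so the second command below is only an inference from the address and port in the error records:

# Expose Malloc0 as a namespace of the subsystem created above.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Hypothetical follow-up (not in this log): add the TCP listener the host is trying to reach.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420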
00:29:12.445 [2024-10-09 00:36:42.829760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.829791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.830176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.830206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.830443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.830471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.830884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.830915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.831258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.831287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.831657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.831685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.832078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.832109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.832466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.832496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.832845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.832876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.833249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.833279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 
00:29:12.445 [2024-10-09 00:36:42.833634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.833663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.834044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.834074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.834445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.834474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.834814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.834844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.835090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.835120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.835238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.835273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.835625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.835655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.836050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.836081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.836443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.836472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.836827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.836856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 
00:29:12.445 [2024-10-09 00:36:42.837088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.837117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.837360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.837388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.445 [2024-10-09 00:36:42.837657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.445 [2024-10-09 00:36:42.837688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.445 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.838130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.838160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.838379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.838406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.838650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.838682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.838921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.838952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.839224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.839252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.839540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.839569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.839945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.839977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 
00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.446 [2024-10-09 00:36:42.840353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.840383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.446 [2024-10-09 00:36:42.840741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.840773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.841007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.446 [2024-10-09 00:36:42.841037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.446 [2024-10-09 00:36:42.841406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.841437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.841804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.841835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.842201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.842230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.842591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.842620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.842970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.843001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 
00:29:12.446 [2024-10-09 00:36:42.843382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.843410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.843647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.843675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.843950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.843986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.844261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.844289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.844662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.844692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.844863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.844896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.845269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.845297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.845669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.845698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.846146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.846178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.846554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.846584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 
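The errno = 111 reported by posix_sock_create in the entries above is ECONNREFUSED: the host side keeps retrying its TCP connect to 10.0.0.2:4420 while nothing is accepting on that address yet, so every qpair attempt fails the same way. A minimal shell probe of the same condition is sketched below; the address and port are copied from the log, but the probe itself is illustrative and not part of the test scripts.

# Illustrative only: trigger ECONNREFUSED (errno 111) by connecting to a port
# with no listener. ADDR/PORT mirror the log; adjust for a local experiment.
ADDR=10.0.0.2
PORT=4420
if ! timeout 1 bash -c "exec 3<>/dev/tcp/${ADDR}/${PORT}" 2>/dev/null; then
  echo "connect to ${ADDR}:${PORT} failed (refused or unreachable), matching errno 111 above"
fi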
00:29:12.446 [2024-10-09 00:36:42.846950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.446 [2024-10-09 00:36:42.846980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9fc000b90 with addr=10.0.0.2, port=4420 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.847126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.446 [2024-10-09 00:36:42.858034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.446 [2024-10-09 00:36:42.858177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.446 [2024-10-09 00:36:42.858228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.446 [2024-10-09 00:36:42.858251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.446 [2024-10-09 00:36:42.858272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.446 [2024-10-09 00:36:42.858341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.446 00:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3437596 00:29:12.446 [2024-10-09 00:36:42.867918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.446 [2024-10-09 00:36:42.868014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.446 [2024-10-09 00:36:42.868043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.446 [2024-10-09 00:36:42.868058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.446 [2024-10-09 00:36:42.868071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.446 [2024-10-09 00:36:42.868101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.446 qpair failed and we were unable to recover it. 
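Interleaved with the connect failures, the xtrace lines show host/target_disconnect.sh finishing the target configuration over RPC: nvmf_subsystem_add_ns attaches Malloc0 to nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_listener adds the TCP listener on 10.0.0.2:4420, the target then logs "NVMe/TCP Target Listening", and a discovery listener is added on the same address. A rough equivalent driven directly with SPDK's scripts/rpc.py is sketched below; the add_ns/add_listener calls mirror the trace, while the transport, bdev, and subsystem creation steps are assumptions about what the script does earlier and are not taken from this excerpt.

# Sketch only. Assumes an SPDK nvmf_tgt application is already running.
./scripts/rpc.py nvmf_create_transport -t tcp                            # assumed earlier step
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # assumed earlier step
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a     # assumed earlier step
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420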
00:29:12.446 [2024-10-09 00:36:42.877887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.446 [2024-10-09 00:36:42.877962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.446 [2024-10-09 00:36:42.877984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.446 [2024-10-09 00:36:42.877994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.446 [2024-10-09 00:36:42.878006] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.446 [2024-10-09 00:36:42.878030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.446 qpair failed and we were unable to recover it. 00:29:12.446 [2024-10-09 00:36:42.887887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.887970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.887986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.887993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.887999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.888016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 00:29:12.447 [2024-10-09 00:36:42.897879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.897985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.898001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.898011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.898018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.898034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 
00:29:12.447 [2024-10-09 00:36:42.907816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.907882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.907899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.907906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.907913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.907930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 00:29:12.447 [2024-10-09 00:36:42.917843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.917905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.917923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.917930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.917937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.917954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 00:29:12.447 [2024-10-09 00:36:42.927902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.927982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.927999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.928006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.928013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.928029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 
00:29:12.447 [2024-10-09 00:36:42.938006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.938079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.938096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.938103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.938109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.938126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 00:29:12.447 [2024-10-09 00:36:42.947981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.948058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.948075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.948087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.948094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.948111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 00:29:12.447 [2024-10-09 00:36:42.958016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.958127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.958144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.958151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.958158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.958174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 
00:29:12.447 [2024-10-09 00:36:42.968036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.968109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.968130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.968139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.968150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.968169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 00:29:12.447 [2024-10-09 00:36:42.977987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.978058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.978076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.978083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.978090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.978108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 00:29:12.447 [2024-10-09 00:36:42.988068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.988128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.988147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.988154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.988160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.988178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 
00:29:12.447 [2024-10-09 00:36:42.998045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:42.998142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:42.998159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:42.998166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:42.998174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:42.998190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 00:29:12.447 [2024-10-09 00:36:43.008142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:43.008206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:43.008224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:43.008231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:43.008238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.447 [2024-10-09 00:36:43.008255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.447 qpair failed and we were unable to recover it. 00:29:12.447 [2024-10-09 00:36:43.018223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.447 [2024-10-09 00:36:43.018296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.447 [2024-10-09 00:36:43.018317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.447 [2024-10-09 00:36:43.018325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.447 [2024-10-09 00:36:43.018337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.448 [2024-10-09 00:36:43.018356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.448 qpair failed and we were unable to recover it. 
00:29:12.448 [2024-10-09 00:36:43.028190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.448 [2024-10-09 00:36:43.028247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.448 [2024-10-09 00:36:43.028266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.448 [2024-10-09 00:36:43.028274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.448 [2024-10-09 00:36:43.028282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.448 [2024-10-09 00:36:43.028300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.448 qpair failed and we were unable to recover it. 00:29:12.448 [2024-10-09 00:36:43.038227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.448 [2024-10-09 00:36:43.038294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.448 [2024-10-09 00:36:43.038317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.448 [2024-10-09 00:36:43.038325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.448 [2024-10-09 00:36:43.038331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.448 [2024-10-09 00:36:43.038348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.448 qpair failed and we were unable to recover it. 00:29:12.448 [2024-10-09 00:36:43.048272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.448 [2024-10-09 00:36:43.048338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.448 [2024-10-09 00:36:43.048354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.448 [2024-10-09 00:36:43.048362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.448 [2024-10-09 00:36:43.048368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.448 [2024-10-09 00:36:43.048385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.448 qpair failed and we were unable to recover it. 
00:29:12.448 [2024-10-09 00:36:43.058290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.448 [2024-10-09 00:36:43.058401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.448 [2024-10-09 00:36:43.058419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.448 [2024-10-09 00:36:43.058427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.448 [2024-10-09 00:36:43.058433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.448 [2024-10-09 00:36:43.058449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.448 qpair failed and we were unable to recover it. 00:29:12.448 [2024-10-09 00:36:43.068273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.448 [2024-10-09 00:36:43.068349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.448 [2024-10-09 00:36:43.068385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.448 [2024-10-09 00:36:43.068394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.448 [2024-10-09 00:36:43.068401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.448 [2024-10-09 00:36:43.068425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.448 qpair failed and we were unable to recover it. 00:29:12.711 [2024-10-09 00:36:43.078324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.711 [2024-10-09 00:36:43.078437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.711 [2024-10-09 00:36:43.078465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.711 [2024-10-09 00:36:43.078474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.711 [2024-10-09 00:36:43.078482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.711 [2024-10-09 00:36:43.078509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.711 qpair failed and we were unable to recover it. 
00:29:12.711 [2024-10-09 00:36:43.088340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.711 [2024-10-09 00:36:43.088418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.711 [2024-10-09 00:36:43.088453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.711 [2024-10-09 00:36:43.088462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.711 [2024-10-09 00:36:43.088470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.711 [2024-10-09 00:36:43.088493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.711 qpair failed and we were unable to recover it. 00:29:12.711 [2024-10-09 00:36:43.098435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.711 [2024-10-09 00:36:43.098513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.711 [2024-10-09 00:36:43.098533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.711 [2024-10-09 00:36:43.098541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.711 [2024-10-09 00:36:43.098547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.711 [2024-10-09 00:36:43.098565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.711 qpair failed and we were unable to recover it. 00:29:12.711 [2024-10-09 00:36:43.108515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.711 [2024-10-09 00:36:43.108631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.711 [2024-10-09 00:36:43.108649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.711 [2024-10-09 00:36:43.108657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.711 [2024-10-09 00:36:43.108664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.711 [2024-10-09 00:36:43.108681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.711 qpair failed and we were unable to recover it. 
00:29:12.711 [2024-10-09 00:36:43.118486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.711 [2024-10-09 00:36:43.118562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.711 [2024-10-09 00:36:43.118580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.711 [2024-10-09 00:36:43.118588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.711 [2024-10-09 00:36:43.118596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.711 [2024-10-09 00:36:43.118615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.711 qpair failed and we were unable to recover it. 00:29:12.711 [2024-10-09 00:36:43.128423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.711 [2024-10-09 00:36:43.128486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.711 [2024-10-09 00:36:43.128508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.711 [2024-10-09 00:36:43.128516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.711 [2024-10-09 00:36:43.128522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.711 [2024-10-09 00:36:43.128539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.711 qpair failed and we were unable to recover it. 00:29:12.711 [2024-10-09 00:36:43.138559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.711 [2024-10-09 00:36:43.138639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.138658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.138665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.138671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.138688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-10-09 00:36:43.148522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.148596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.148615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.148622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.148628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.148646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-10-09 00:36:43.158408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.158472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.158490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.158497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.158503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.158520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-10-09 00:36:43.168546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.168609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.168627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.168634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.168647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.168663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-10-09 00:36:43.178629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.178697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.178713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.178726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.178733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.178749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-10-09 00:36:43.188586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.188645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.188661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.188668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.188675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.188691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-10-09 00:36:43.198528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.198597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.198614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.198622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.198628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.198644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-10-09 00:36:43.208660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.208734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.208752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.208759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.208765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.208782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-10-09 00:36:43.218744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.218821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.218837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.218845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.218851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.218867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-10-09 00:36:43.228745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.228840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.228857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.228865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.228872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.228888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-10-09 00:36:43.238714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.238783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.238800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.238807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.238814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.238830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-10-09 00:36:43.248734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.248793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.248809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.248817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.248823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.248840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-10-09 00:36:43.258842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.258911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.258928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.258936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.258948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.258964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-10-09 00:36:43.268843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.268909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.268926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.712 [2024-10-09 00:36:43.268933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.712 [2024-10-09 00:36:43.268939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.712 [2024-10-09 00:36:43.268955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-10-09 00:36:43.278884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.712 [2024-10-09 00:36:43.278951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.712 [2024-10-09 00:36:43.278968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.713 [2024-10-09 00:36:43.278975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.713 [2024-10-09 00:36:43.278981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.713 [2024-10-09 00:36:43.278997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-10-09 00:36:43.288884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.713 [2024-10-09 00:36:43.288943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.713 [2024-10-09 00:36:43.288958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.713 [2024-10-09 00:36:43.288966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.713 [2024-10-09 00:36:43.288972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.713 [2024-10-09 00:36:43.288988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.713 qpair failed and we were unable to recover it. 
00:29:12.713 [2024-10-09 00:36:43.299027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.713 [2024-10-09 00:36:43.299123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.713 [2024-10-09 00:36:43.299139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.713 [2024-10-09 00:36:43.299147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.713 [2024-10-09 00:36:43.299153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.713 [2024-10-09 00:36:43.299169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-10-09 00:36:43.308988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.713 [2024-10-09 00:36:43.309062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.713 [2024-10-09 00:36:43.309079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.713 [2024-10-09 00:36:43.309086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.713 [2024-10-09 00:36:43.309093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.713 [2024-10-09 00:36:43.309109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-10-09 00:36:43.319005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.713 [2024-10-09 00:36:43.319072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.713 [2024-10-09 00:36:43.319091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.713 [2024-10-09 00:36:43.319098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.713 [2024-10-09 00:36:43.319105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.713 [2024-10-09 00:36:43.319121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.713 qpair failed and we were unable to recover it. 
00:29:12.713 [2024-10-09 00:36:43.328988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.713 [2024-10-09 00:36:43.329049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.713 [2024-10-09 00:36:43.329065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.713 [2024-10-09 00:36:43.329073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.713 [2024-10-09 00:36:43.329079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.713 [2024-10-09 00:36:43.329095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-10-09 00:36:43.339071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.713 [2024-10-09 00:36:43.339140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.713 [2024-10-09 00:36:43.339157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.713 [2024-10-09 00:36:43.339164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.713 [2024-10-09 00:36:43.339170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.713 [2024-10-09 00:36:43.339185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.975 [2024-10-09 00:36:43.349054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.975 [2024-10-09 00:36:43.349119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.975 [2024-10-09 00:36:43.349136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.975 [2024-10-09 00:36:43.349150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.975 [2024-10-09 00:36:43.349156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.975 [2024-10-09 00:36:43.349172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.975 [2024-10-09 00:36:43.358979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.975 [2024-10-09 00:36:43.359044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.975 [2024-10-09 00:36:43.359061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.975 [2024-10-09 00:36:43.359068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.975 [2024-10-09 00:36:43.359075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.975 [2024-10-09 00:36:43.359090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-10-09 00:36:43.369133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.975 [2024-10-09 00:36:43.369193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.975 [2024-10-09 00:36:43.369211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.975 [2024-10-09 00:36:43.369218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.975 [2024-10-09 00:36:43.369225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.975 [2024-10-09 00:36:43.369241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-10-09 00:36:43.379193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.975 [2024-10-09 00:36:43.379264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.975 [2024-10-09 00:36:43.379282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.975 [2024-10-09 00:36:43.379289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.975 [2024-10-09 00:36:43.379295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.975 [2024-10-09 00:36:43.379311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.975 [2024-10-09 00:36:43.389092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.975 [2024-10-09 00:36:43.389165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.975 [2024-10-09 00:36:43.389181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.975 [2024-10-09 00:36:43.389189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.975 [2024-10-09 00:36:43.389195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.975 [2024-10-09 00:36:43.389211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-10-09 00:36:43.399231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.975 [2024-10-09 00:36:43.399341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.975 [2024-10-09 00:36:43.399357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.975 [2024-10-09 00:36:43.399364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.975 [2024-10-09 00:36:43.399371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.975 [2024-10-09 00:36:43.399388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-10-09 00:36:43.409235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.409328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.409344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.409351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.409358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.409374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 
00:29:12.976 [2024-10-09 00:36:43.419333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.419446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.419463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.419472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.419478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.419494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-10-09 00:36:43.429275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.429339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.429355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.429362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.429369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.429385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-10-09 00:36:43.439324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.439395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.439412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.439424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.439430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.439447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 
00:29:12.976 [2024-10-09 00:36:43.449376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.449446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.449462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.449469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.449476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.449492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-10-09 00:36:43.459420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.459495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.459512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.459519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.459526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.459542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-10-09 00:36:43.469412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.469478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.469494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.469501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.469508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.469524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 
00:29:12.976 [2024-10-09 00:36:43.479447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.479519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.479536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.479543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.479549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.479565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-10-09 00:36:43.489471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.489575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.489592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.489600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.489606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.489623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-10-09 00:36:43.499560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.499621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.499637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.499644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.499651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.499667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 
00:29:12.976 [2024-10-09 00:36:43.509574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.509643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.509660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.509668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.509674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.509690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-10-09 00:36:43.519589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.519654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.519671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.519678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.519684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.519700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-10-09 00:36:43.529572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.529636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.529662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.529674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.529681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.976 [2024-10-09 00:36:43.529698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.976 qpair failed and we were unable to recover it. 
00:29:12.976 [2024-10-09 00:36:43.539666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.976 [2024-10-09 00:36:43.539781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.976 [2024-10-09 00:36:43.539800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.976 [2024-10-09 00:36:43.539809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.976 [2024-10-09 00:36:43.539815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.977 [2024-10-09 00:36:43.539832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-10-09 00:36:43.549649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.977 [2024-10-09 00:36:43.549715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.977 [2024-10-09 00:36:43.549737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.977 [2024-10-09 00:36:43.549745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.977 [2024-10-09 00:36:43.549751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.977 [2024-10-09 00:36:43.549767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-10-09 00:36:43.559694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.977 [2024-10-09 00:36:43.559759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.977 [2024-10-09 00:36:43.559776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.977 [2024-10-09 00:36:43.559784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.977 [2024-10-09 00:36:43.559791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.977 [2024-10-09 00:36:43.559808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.977 qpair failed and we were unable to recover it. 
00:29:12.977 [2024-10-09 00:36:43.569690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.977 [2024-10-09 00:36:43.569755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.977 [2024-10-09 00:36:43.569772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.977 [2024-10-09 00:36:43.569779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.977 [2024-10-09 00:36:43.569786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.977 [2024-10-09 00:36:43.569808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-10-09 00:36:43.579821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.977 [2024-10-09 00:36:43.579890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.977 [2024-10-09 00:36:43.579906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.977 [2024-10-09 00:36:43.579913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.977 [2024-10-09 00:36:43.579919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.977 [2024-10-09 00:36:43.579936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-10-09 00:36:43.589781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.977 [2024-10-09 00:36:43.589839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.977 [2024-10-09 00:36:43.589855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.977 [2024-10-09 00:36:43.589862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.977 [2024-10-09 00:36:43.589869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.977 [2024-10-09 00:36:43.589885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.977 qpair failed and we were unable to recover it. 
00:29:12.977 [2024-10-09 00:36:43.599815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.977 [2024-10-09 00:36:43.599882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.977 [2024-10-09 00:36:43.599898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.977 [2024-10-09 00:36:43.599906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.977 [2024-10-09 00:36:43.599912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:12.977 [2024-10-09 00:36:43.599928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.977 qpair failed and we were unable to recover it. 00:29:13.240 [2024-10-09 00:36:43.609835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.240 [2024-10-09 00:36:43.609901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.240 [2024-10-09 00:36:43.609917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.240 [2024-10-09 00:36:43.609925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.240 [2024-10-09 00:36:43.609931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.240 [2024-10-09 00:36:43.609948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.240 qpair failed and we were unable to recover it. 00:29:13.240 [2024-10-09 00:36:43.619922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.240 [2024-10-09 00:36:43.619992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.240 [2024-10-09 00:36:43.620021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.240 [2024-10-09 00:36:43.620028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.240 [2024-10-09 00:36:43.620035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.240 [2024-10-09 00:36:43.620052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.240 qpair failed and we were unable to recover it. 
00:29:13.240 [2024-10-09 00:36:43.629891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.240 [2024-10-09 00:36:43.629951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.240 [2024-10-09 00:36:43.629968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.240 [2024-10-09 00:36:43.629975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.240 [2024-10-09 00:36:43.629982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.240 [2024-10-09 00:36:43.629998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.240 qpair failed and we were unable to recover it. 00:29:13.240 [2024-10-09 00:36:43.639919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.240 [2024-10-09 00:36:43.639989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.240 [2024-10-09 00:36:43.640006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.240 [2024-10-09 00:36:43.640013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.240 [2024-10-09 00:36:43.640019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.240 [2024-10-09 00:36:43.640035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.240 qpair failed and we were unable to recover it. 00:29:13.240 [2024-10-09 00:36:43.649926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.240 [2024-10-09 00:36:43.649984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.240 [2024-10-09 00:36:43.650001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.240 [2024-10-09 00:36:43.650008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.240 [2024-10-09 00:36:43.650014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.240 [2024-10-09 00:36:43.650030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.240 qpair failed and we were unable to recover it. 
00:29:13.240 [2024-10-09 00:36:43.660010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.240 [2024-10-09 00:36:43.660085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.240 [2024-10-09 00:36:43.660102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.240 [2024-10-09 00:36:43.660109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.240 [2024-10-09 00:36:43.660116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.240 [2024-10-09 00:36:43.660137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.240 qpair failed and we were unable to recover it. 00:29:13.240 [2024-10-09 00:36:43.669984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.240 [2024-10-09 00:36:43.670052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.240 [2024-10-09 00:36:43.670068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.240 [2024-10-09 00:36:43.670076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.240 [2024-10-09 00:36:43.670082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.240 [2024-10-09 00:36:43.670098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.240 qpair failed and we were unable to recover it. 00:29:13.240 [2024-10-09 00:36:43.680048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.240 [2024-10-09 00:36:43.680106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.240 [2024-10-09 00:36:43.680122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.240 [2024-10-09 00:36:43.680129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.240 [2024-10-09 00:36:43.680136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.240 [2024-10-09 00:36:43.680152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.240 qpair failed and we were unable to recover it. 
00:29:13.240 [2024-10-09 00:36:43.690036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.240 [2024-10-09 00:36:43.690096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.240 [2024-10-09 00:36:43.690111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.240 [2024-10-09 00:36:43.690119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.240 [2024-10-09 00:36:43.690125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.241 [2024-10-09 00:36:43.690141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.241 qpair failed and we were unable to recover it. 00:29:13.241 [2024-10-09 00:36:43.700130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.241 [2024-10-09 00:36:43.700203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.241 [2024-10-09 00:36:43.700220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.241 [2024-10-09 00:36:43.700227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.241 [2024-10-09 00:36:43.700233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.241 [2024-10-09 00:36:43.700250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.241 qpair failed and we were unable to recover it. 00:29:13.241 [2024-10-09 00:36:43.710031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.241 [2024-10-09 00:36:43.710093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.241 [2024-10-09 00:36:43.710114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.241 [2024-10-09 00:36:43.710121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.241 [2024-10-09 00:36:43.710128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.241 [2024-10-09 00:36:43.710143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.241 qpair failed and we were unable to recover it. 
00:29:13.241 [2024-10-09 00:36:43.720178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.241 [2024-10-09 00:36:43.720242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.241 [2024-10-09 00:36:43.720258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.241 [2024-10-09 00:36:43.720265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.241 [2024-10-09 00:36:43.720271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.241 [2024-10-09 00:36:43.720287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.241 qpair failed and we were unable to recover it. 00:29:13.241 [2024-10-09 00:36:43.730254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.241 [2024-10-09 00:36:43.730312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.241 [2024-10-09 00:36:43.730328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.241 [2024-10-09 00:36:43.730336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.241 [2024-10-09 00:36:43.730342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.241 [2024-10-09 00:36:43.730358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.241 qpair failed and we were unable to recover it. 00:29:13.241 [2024-10-09 00:36:43.740232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.241 [2024-10-09 00:36:43.740299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.241 [2024-10-09 00:36:43.740317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.241 [2024-10-09 00:36:43.740324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.241 [2024-10-09 00:36:43.740331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.241 [2024-10-09 00:36:43.740347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.241 qpair failed and we were unable to recover it. 
00:29:13.241 [2024-10-09 00:36:43.750258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.241 [2024-10-09 00:36:43.750312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.241 [2024-10-09 00:36:43.750329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.241 [2024-10-09 00:36:43.750336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.241 [2024-10-09 00:36:43.750347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.241 [2024-10-09 00:36:43.750364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.241 qpair failed and we were unable to recover it. 00:29:13.241 [2024-10-09 00:36:43.760292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.241 [2024-10-09 00:36:43.760348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.241 [2024-10-09 00:36:43.760365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.241 [2024-10-09 00:36:43.760373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.241 [2024-10-09 00:36:43.760380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.241 [2024-10-09 00:36:43.760396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.241 qpair failed and we were unable to recover it. 00:29:13.241 [2024-10-09 00:36:43.770305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.241 [2024-10-09 00:36:43.770366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.241 [2024-10-09 00:36:43.770384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.241 [2024-10-09 00:36:43.770391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.241 [2024-10-09 00:36:43.770398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:13.241 [2024-10-09 00:36:43.770414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.241 qpair failed and we were unable to recover it. 
[... the same CONNECT failure sequence (ctrlr.c: 762 "Unknown controller ID 0x1", nvme_fabric.c connect failed rc -5 with sct 1, sc 130, nvme_tcp.c failed to connect tqpair=0x7fa9fc000b90, nvme_qpair.c CQ transport error -6 on qpair id 2, "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt from 00:36:43.780 through 00:36:44.402, roughly one attempt every 10 ms ...]
00:29:14.033 [2024-10-09 00:36:44.412090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.412138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.412152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.412159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.412165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.412178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 00:29:14.033 [2024-10-09 00:36:44.422156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.422245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.422258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.422269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.422276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.422289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 00:29:14.033 [2024-10-09 00:36:44.432172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.432224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.432238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.432245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.432252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.432265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 
00:29:14.033 [2024-10-09 00:36:44.442238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.442310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.442324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.442330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.442337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.442351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 00:29:14.033 [2024-10-09 00:36:44.452202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.452265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.452277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.452284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.452291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.452304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 00:29:14.033 [2024-10-09 00:36:44.462209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.462253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.462267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.462274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.462280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.462294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 
00:29:14.033 [2024-10-09 00:36:44.472286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.472378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.472392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.472400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.472407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.472421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 00:29:14.033 [2024-10-09 00:36:44.482305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.482359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.482372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.482379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.482386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.482400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 00:29:14.033 [2024-10-09 00:36:44.492259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.492306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.492318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.492325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.492332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.492346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 
00:29:14.033 [2024-10-09 00:36:44.502331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.502373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.502386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.502393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.502400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.502413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 00:29:14.033 [2024-10-09 00:36:44.512299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.512343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.512356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.512370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.512377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.512390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 00:29:14.033 [2024-10-09 00:36:44.522456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.033 [2024-10-09 00:36:44.522504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.033 [2024-10-09 00:36:44.522517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.033 [2024-10-09 00:36:44.522524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.033 [2024-10-09 00:36:44.522531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.033 [2024-10-09 00:36:44.522544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.033 qpair failed and we were unable to recover it. 
00:29:14.033 [2024-10-09 00:36:44.532420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.532472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.532497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.532506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.532513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.532531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 00:29:14.034 [2024-10-09 00:36:44.542450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.542534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.542558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.542566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.542574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.542592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 00:29:14.034 [2024-10-09 00:36:44.552453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.552499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.552515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.552522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.552528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.552544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 
00:29:14.034 [2024-10-09 00:36:44.562520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.562568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.562582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.562589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.562595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.562610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 00:29:14.034 [2024-10-09 00:36:44.572508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.572559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.572573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.572580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.572587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.572601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 00:29:14.034 [2024-10-09 00:36:44.582443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.582491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.582506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.582514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.582520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.582535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 
00:29:14.034 [2024-10-09 00:36:44.592567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.592611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.592624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.592632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.592638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.592652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 00:29:14.034 [2024-10-09 00:36:44.602648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.602692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.602708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.602715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.602726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.602742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 00:29:14.034 [2024-10-09 00:36:44.612639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.612688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.612702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.612709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.612715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.612732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 
00:29:14.034 [2024-10-09 00:36:44.622667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.622751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.622766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.622773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.622780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.622794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 00:29:14.034 [2024-10-09 00:36:44.632687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.632743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.632756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.034 [2024-10-09 00:36:44.632763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.034 [2024-10-09 00:36:44.632769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.034 [2024-10-09 00:36:44.632783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.034 qpair failed and we were unable to recover it. 00:29:14.034 [2024-10-09 00:36:44.642750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.034 [2024-10-09 00:36:44.642797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.034 [2024-10-09 00:36:44.642810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.035 [2024-10-09 00:36:44.642817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.035 [2024-10-09 00:36:44.642824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.035 [2024-10-09 00:36:44.642841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.035 qpair failed and we were unable to recover it. 
00:29:14.035 [2024-10-09 00:36:44.652691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.035 [2024-10-09 00:36:44.652737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.035 [2024-10-09 00:36:44.652751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.035 [2024-10-09 00:36:44.652758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.035 [2024-10-09 00:36:44.652764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.035 [2024-10-09 00:36:44.652778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.035 qpair failed and we were unable to recover it. 00:29:14.035 [2024-10-09 00:36:44.662749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.035 [2024-10-09 00:36:44.662796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.035 [2024-10-09 00:36:44.662809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.035 [2024-10-09 00:36:44.662816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.035 [2024-10-09 00:36:44.662822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.035 [2024-10-09 00:36:44.662836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.035 qpair failed and we were unable to recover it. 00:29:14.297 [2024-10-09 00:36:44.672772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.297 [2024-10-09 00:36:44.672850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.297 [2024-10-09 00:36:44.672863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.297 [2024-10-09 00:36:44.672870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.297 [2024-10-09 00:36:44.672877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.297 [2024-10-09 00:36:44.672891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.297 qpair failed and we were unable to recover it. 
00:29:14.297 [2024-10-09 00:36:44.682872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.297 [2024-10-09 00:36:44.682929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.682942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.682949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.682956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.682969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 00:29:14.298 [2024-10-09 00:36:44.692846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.692890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.692907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.692914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.692920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.692934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 00:29:14.298 [2024-10-09 00:36:44.702852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.702899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.702912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.702919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.702925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.702939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 
00:29:14.298 [2024-10-09 00:36:44.712903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.712947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.712960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.712967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.712973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.712987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 00:29:14.298 [2024-10-09 00:36:44.722965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.723055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.723068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.723075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.723081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.723095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 00:29:14.298 [2024-10-09 00:36:44.732961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.733009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.733022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.733029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.733039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.733053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 
00:29:14.298 [2024-10-09 00:36:44.742971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.743017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.743030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.743037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.743044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.743057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 00:29:14.298 [2024-10-09 00:36:44.752983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.753031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.753044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.753051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.753057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.753071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 00:29:14.298 [2024-10-09 00:36:44.763071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.763119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.763132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.763139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.763145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.763159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 
00:29:14.298 [2024-10-09 00:36:44.773068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.773117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.773129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.773136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.773143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.773156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 00:29:14.298 [2024-10-09 00:36:44.783084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.783137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.783150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.783157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.783163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.783177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 00:29:14.298 [2024-10-09 00:36:44.793099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.793146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.793159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.298 [2024-10-09 00:36:44.793166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.298 [2024-10-09 00:36:44.793172] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.298 [2024-10-09 00:36:44.793186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.298 qpair failed and we were unable to recover it. 
00:29:14.298 [2024-10-09 00:36:44.803028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.298 [2024-10-09 00:36:44.803075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.298 [2024-10-09 00:36:44.803088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.803095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.803102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.803116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 00:29:14.299 [2024-10-09 00:36:44.813183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.813235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.813249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.813256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.813262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.813276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 00:29:14.299 [2024-10-09 00:36:44.823185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.823227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.823240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.823247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.823257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.823271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 
00:29:14.299 [2024-10-09 00:36:44.833208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.833250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.833264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.833271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.833277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.833291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 00:29:14.299 [2024-10-09 00:36:44.843269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.843313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.843326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.843333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.843339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.843353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 00:29:14.299 [2024-10-09 00:36:44.853233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.853282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.853295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.853302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.853308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.853322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 
00:29:14.299 [2024-10-09 00:36:44.863306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.863372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.863385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.863392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.863398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.863412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 00:29:14.299 [2024-10-09 00:36:44.873292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.873344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.873357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.873364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.873370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.873384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 00:29:14.299 [2024-10-09 00:36:44.883385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.883432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.883445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.883452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.883458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.883472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 
00:29:14.299 [2024-10-09 00:36:44.893350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.893396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.893409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.893416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.893422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.893436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 00:29:14.299 [2024-10-09 00:36:44.903406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.903456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.903469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.903476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.903482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.903496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 00:29:14.299 [2024-10-09 00:36:44.913410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.913472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.913497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.913510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.913518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.913537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 
00:29:14.299 [2024-10-09 00:36:44.923480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.299 [2024-10-09 00:36:44.923533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.299 [2024-10-09 00:36:44.923557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.299 [2024-10-09 00:36:44.923566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.299 [2024-10-09 00:36:44.923573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.299 [2024-10-09 00:36:44.923592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.299 qpair failed and we were unable to recover it. 00:29:14.562 [2024-10-09 00:36:44.933483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:44.933537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:44.933561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:44.933570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.562 [2024-10-09 00:36:44.933577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.562 [2024-10-09 00:36:44.933596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.562 qpair failed and we were unable to recover it. 00:29:14.562 [2024-10-09 00:36:44.943425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:44.943472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:44.943487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:44.943495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.562 [2024-10-09 00:36:44.943501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.562 [2024-10-09 00:36:44.943516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.562 qpair failed and we were unable to recover it. 
00:29:14.562 [2024-10-09 00:36:44.953529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:44.953611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:44.953625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:44.953632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.562 [2024-10-09 00:36:44.953639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.562 [2024-10-09 00:36:44.953653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.562 qpair failed and we were unable to recover it. 00:29:14.562 [2024-10-09 00:36:44.963557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:44.963599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:44.963612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:44.963619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.562 [2024-10-09 00:36:44.963626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.562 [2024-10-09 00:36:44.963640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.562 qpair failed and we were unable to recover it. 00:29:14.562 [2024-10-09 00:36:44.973555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:44.973601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:44.973615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:44.973623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.562 [2024-10-09 00:36:44.973629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.562 [2024-10-09 00:36:44.973643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.562 qpair failed and we were unable to recover it. 
00:29:14.562 [2024-10-09 00:36:44.983487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:44.983533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:44.983546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:44.983553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.562 [2024-10-09 00:36:44.983560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.562 [2024-10-09 00:36:44.983574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.562 qpair failed and we were unable to recover it. 00:29:14.562 [2024-10-09 00:36:44.993628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:44.993671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:44.993684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:44.993692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.562 [2024-10-09 00:36:44.993698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.562 [2024-10-09 00:36:44.993712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.562 qpair failed and we were unable to recover it. 00:29:14.562 [2024-10-09 00:36:45.003650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:45.003704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:45.003717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:45.003734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.562 [2024-10-09 00:36:45.003740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.562 [2024-10-09 00:36:45.003755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.562 qpair failed and we were unable to recover it. 
00:29:14.562 [2024-10-09 00:36:45.013684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:45.013730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:45.013744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:45.013751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.562 [2024-10-09 00:36:45.013757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.562 [2024-10-09 00:36:45.013772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.562 qpair failed and we were unable to recover it. 00:29:14.562 [2024-10-09 00:36:45.023726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.562 [2024-10-09 00:36:45.023823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.562 [2024-10-09 00:36:45.023836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.562 [2024-10-09 00:36:45.023843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.023849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.023863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 00:29:14.563 [2024-10-09 00:36:45.033749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.033797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.033810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.033817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.033824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.033838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 
00:29:14.563 [2024-10-09 00:36:45.043740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.043780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.043793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.043800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.043807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.043821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 00:29:14.563 [2024-10-09 00:36:45.053797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.053843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.053856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.053863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.053870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.053884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 00:29:14.563 [2024-10-09 00:36:45.063841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.063887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.063899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.063906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.063913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.063927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 
00:29:14.563 [2024-10-09 00:36:45.073869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.073937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.073950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.073957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.073963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.073977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 00:29:14.563 [2024-10-09 00:36:45.083862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.083911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.083924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.083931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.083937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.083951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 00:29:14.563 [2024-10-09 00:36:45.093905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.093952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.093969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.093976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.093982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.093996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 
00:29:14.563 [2024-10-09 00:36:45.103914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.103961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.103974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.103981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.103987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.104001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 00:29:14.563 [2024-10-09 00:36:45.113989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.114044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.114058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.114065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.114071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.114084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 00:29:14.563 [2024-10-09 00:36:45.124003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.124071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.124084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.124092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.124098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.124111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 
00:29:14.563 [2024-10-09 00:36:45.134023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.134070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.134083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.134090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.134096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.134113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 00:29:14.563 [2024-10-09 00:36:45.144086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.144130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.144143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.144150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.144156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.144170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 00:29:14.563 [2024-10-09 00:36:45.154075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.154129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.154142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.154149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.154155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.563 [2024-10-09 00:36:45.154169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.563 qpair failed and we were unable to recover it. 
00:29:14.563 [2024-10-09 00:36:45.164098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.563 [2024-10-09 00:36:45.164138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.563 [2024-10-09 00:36:45.164152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.563 [2024-10-09 00:36:45.164159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.563 [2024-10-09 00:36:45.164165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.564 [2024-10-09 00:36:45.164179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.564 qpair failed and we were unable to recover it. 00:29:14.564 [2024-10-09 00:36:45.174120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.564 [2024-10-09 00:36:45.174167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.564 [2024-10-09 00:36:45.174181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.564 [2024-10-09 00:36:45.174188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.564 [2024-10-09 00:36:45.174194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.564 [2024-10-09 00:36:45.174208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.564 qpair failed and we were unable to recover it. 00:29:14.564 [2024-10-09 00:36:45.184166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.564 [2024-10-09 00:36:45.184227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.564 [2024-10-09 00:36:45.184243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.564 [2024-10-09 00:36:45.184250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.564 [2024-10-09 00:36:45.184257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.564 [2024-10-09 00:36:45.184273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.564 qpair failed and we were unable to recover it. 
00:29:14.564 [2024-10-09 00:36:45.194179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.564 [2024-10-09 00:36:45.194224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.564 [2024-10-09 00:36:45.194238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.564 [2024-10-09 00:36:45.194245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.564 [2024-10-09 00:36:45.194251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.564 [2024-10-09 00:36:45.194265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.564 qpair failed and we were unable to recover it. 00:29:14.826 [2024-10-09 00:36:45.204187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.826 [2024-10-09 00:36:45.204269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.826 [2024-10-09 00:36:45.204282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.826 [2024-10-09 00:36:45.204289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.826 [2024-10-09 00:36:45.204295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.826 [2024-10-09 00:36:45.204309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.826 qpair failed and we were unable to recover it. 00:29:14.826 [2024-10-09 00:36:45.214228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.826 [2024-10-09 00:36:45.214277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.826 [2024-10-09 00:36:45.214290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.826 [2024-10-09 00:36:45.214297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.826 [2024-10-09 00:36:45.214304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.826 [2024-10-09 00:36:45.214317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.826 qpair failed and we were unable to recover it. 
00:29:14.826 [2024-10-09 00:36:45.224258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.826 [2024-10-09 00:36:45.224343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.826 [2024-10-09 00:36:45.224356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.826 [2024-10-09 00:36:45.224363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.826 [2024-10-09 00:36:45.224369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.826 [2024-10-09 00:36:45.224391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.826 qpair failed and we were unable to recover it. 00:29:14.826 [2024-10-09 00:36:45.234270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.826 [2024-10-09 00:36:45.234314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.826 [2024-10-09 00:36:45.234327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.826 [2024-10-09 00:36:45.234334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.826 [2024-10-09 00:36:45.234340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.826 [2024-10-09 00:36:45.234354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.826 qpair failed and we were unable to recover it. 00:29:14.826 [2024-10-09 00:36:45.244263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.826 [2024-10-09 00:36:45.244331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.826 [2024-10-09 00:36:45.244344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.826 [2024-10-09 00:36:45.244351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.826 [2024-10-09 00:36:45.244357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.826 [2024-10-09 00:36:45.244371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.826 qpair failed and we were unable to recover it. 
00:29:14.826 [2024-10-09 00:36:45.254324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.826 [2024-10-09 00:36:45.254374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.826 [2024-10-09 00:36:45.254387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.826 [2024-10-09 00:36:45.254394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.826 [2024-10-09 00:36:45.254400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.826 [2024-10-09 00:36:45.254414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.826 qpair failed and we were unable to recover it. 00:29:14.826 [2024-10-09 00:36:45.264359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.826 [2024-10-09 00:36:45.264410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.826 [2024-10-09 00:36:45.264423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.826 [2024-10-09 00:36:45.264430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.826 [2024-10-09 00:36:45.264436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.826 [2024-10-09 00:36:45.264450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.826 qpair failed and we were unable to recover it. 00:29:14.826 [2024-10-09 00:36:45.274377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.826 [2024-10-09 00:36:45.274425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.826 [2024-10-09 00:36:45.274453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.826 [2024-10-09 00:36:45.274462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.826 [2024-10-09 00:36:45.274469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.826 [2024-10-09 00:36:45.274488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.826 qpair failed and we were unable to recover it. 
00:29:14.826 [2024-10-09 00:36:45.284381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.826 [2024-10-09 00:36:45.284428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.826 [2024-10-09 00:36:45.284452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.826 [2024-10-09 00:36:45.284461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.826 [2024-10-09 00:36:45.284467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.826 [2024-10-09 00:36:45.284486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.826 qpair failed and we were unable to recover it. 00:29:14.826 [2024-10-09 00:36:45.294434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.294486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.294510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.294519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.294526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.294545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 00:29:14.827 [2024-10-09 00:36:45.304521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.304574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.304598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.304607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.304614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.304633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 
00:29:14.827 [2024-10-09 00:36:45.314360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.314411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.314426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.314433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.314444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.314459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 00:29:14.827 [2024-10-09 00:36:45.324514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.324564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.324578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.324585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.324591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.324605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 00:29:14.827 [2024-10-09 00:36:45.334528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.334573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.334587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.334594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.334600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.334614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 
00:29:14.827 [2024-10-09 00:36:45.344573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.344616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.344629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.344636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.344642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.344656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 00:29:14.827 [2024-10-09 00:36:45.354594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.354636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.354649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.354656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.354663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.354676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 00:29:14.827 [2024-10-09 00:36:45.364616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.364667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.364681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.364688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.364694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.364708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 
00:29:14.827 [2024-10-09 00:36:45.374634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.374719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.374736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.374744] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.374750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.374764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 00:29:14.827 [2024-10-09 00:36:45.384688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.384739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.384753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.384760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.384766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.384781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 00:29:14.827 [2024-10-09 00:36:45.394712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.394758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.394771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.394779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.394785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.394799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 
00:29:14.827 [2024-10-09 00:36:45.404725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.404785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.404798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.404809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.404815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.404829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 00:29:14.827 [2024-10-09 00:36:45.414753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.414800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.414813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.414820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.414826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.414840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 00:29:14.827 [2024-10-09 00:36:45.424768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.424830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.827 [2024-10-09 00:36:45.424843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.827 [2024-10-09 00:36:45.424850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.827 [2024-10-09 00:36:45.424856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.827 [2024-10-09 00:36:45.424870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.827 qpair failed and we were unable to recover it. 
00:29:14.827 [2024-10-09 00:36:45.434670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.827 [2024-10-09 00:36:45.434714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.828 [2024-10-09 00:36:45.434730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.828 [2024-10-09 00:36:45.434737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.828 [2024-10-09 00:36:45.434743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.828 [2024-10-09 00:36:45.434757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.828 qpair failed and we were unable to recover it. 00:29:14.828 [2024-10-09 00:36:45.444818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.828 [2024-10-09 00:36:45.444868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.828 [2024-10-09 00:36:45.444880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.828 [2024-10-09 00:36:45.444887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.828 [2024-10-09 00:36:45.444893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.828 [2024-10-09 00:36:45.444907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.828 qpair failed and we were unable to recover it. 00:29:14.828 [2024-10-09 00:36:45.454879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.828 [2024-10-09 00:36:45.454927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.828 [2024-10-09 00:36:45.454940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.828 [2024-10-09 00:36:45.454947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.828 [2024-10-09 00:36:45.454953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:14.828 [2024-10-09 00:36:45.454967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.828 qpair failed and we were unable to recover it. 
00:29:15.090 [2024-10-09 00:36:45.464804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.090 [2024-10-09 00:36:45.464861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.090 [2024-10-09 00:36:45.464876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.090 [2024-10-09 00:36:45.464883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.090 [2024-10-09 00:36:45.464889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.090 [2024-10-09 00:36:45.464904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.090 qpair failed and we were unable to recover it. 00:29:15.090 [2024-10-09 00:36:45.474906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.090 [2024-10-09 00:36:45.474951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.474965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.474972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.474979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.474992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 00:29:15.091 [2024-10-09 00:36:45.484913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.484975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.484989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.484996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.485002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.485016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 
00:29:15.091 [2024-10-09 00:36:45.494951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.494994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.495008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.495018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.495025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.495038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 00:29:15.091 [2024-10-09 00:36:45.504879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.504930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.504943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.504950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.504956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.504969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 00:29:15.091 [2024-10-09 00:36:45.515004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.515051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.515064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.515071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.515078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.515091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 
00:29:15.091 [2024-10-09 00:36:45.525060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.525121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.525134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.525141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.525148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.525161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 00:29:15.091 [2024-10-09 00:36:45.535052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.535094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.535107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.535114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.535121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.535134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 00:29:15.091 [2024-10-09 00:36:45.545112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.545163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.545176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.545183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.545189] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.545203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 
00:29:15.091 [2024-10-09 00:36:45.555131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.555179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.555192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.555199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.555205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.555219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 00:29:15.091 [2024-10-09 00:36:45.565159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.565200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.565212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.565219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.565226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.565239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 00:29:15.091 [2024-10-09 00:36:45.575202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.575250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.575263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.575270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.575277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.575290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 
00:29:15.091 [2024-10-09 00:36:45.585228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.585273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.585290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.585297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.585303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.585317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 00:29:15.091 [2024-10-09 00:36:45.595221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.595266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.595279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.595286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.595292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.595306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 00:29:15.091 [2024-10-09 00:36:45.605267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.605353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.091 [2024-10-09 00:36:45.605366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.091 [2024-10-09 00:36:45.605373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.091 [2024-10-09 00:36:45.605380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.091 [2024-10-09 00:36:45.605393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.091 qpair failed and we were unable to recover it. 
00:29:15.091 [2024-10-09 00:36:45.615289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.091 [2024-10-09 00:36:45.615340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.615353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.615359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.615366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.615379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 00:29:15.092 [2024-10-09 00:36:45.625330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.625383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.625396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.625403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.625410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.625428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 00:29:15.092 [2024-10-09 00:36:45.635362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.635403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.635417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.635423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.635430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.635443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 
00:29:15.092 [2024-10-09 00:36:45.645380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.645420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.645433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.645440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.645446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.645460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 00:29:15.092 [2024-10-09 00:36:45.655436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.655513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.655538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.655547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.655554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.655573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 00:29:15.092 [2024-10-09 00:36:45.665444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.665499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.665514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.665521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.665528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.665543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 
00:29:15.092 [2024-10-09 00:36:45.675332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.675377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.675395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.675403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.675409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.675423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 00:29:15.092 [2024-10-09 00:36:45.685472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.685521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.685535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.685542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.685548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.685562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 00:29:15.092 [2024-10-09 00:36:45.695573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.695617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.695630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.695637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.695643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.695657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 
00:29:15.092 [2024-10-09 00:36:45.705563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.705652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.705665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.705671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.705678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.705692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 00:29:15.092 [2024-10-09 00:36:45.715574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.092 [2024-10-09 00:36:45.715628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.092 [2024-10-09 00:36:45.715641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.092 [2024-10-09 00:36:45.715648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.092 [2024-10-09 00:36:45.715654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.092 [2024-10-09 00:36:45.715672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.092 qpair failed and we were unable to recover it. 00:29:15.354 [2024-10-09 00:36:45.725575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.354 [2024-10-09 00:36:45.725621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.354 [2024-10-09 00:36:45.725634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.354 [2024-10-09 00:36:45.725641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.354 [2024-10-09 00:36:45.725648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.354 [2024-10-09 00:36:45.725661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.354 qpair failed and we were unable to recover it. 
00:29:15.354 [2024-10-09 00:36:45.735637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.354 [2024-10-09 00:36:45.735683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.354 [2024-10-09 00:36:45.735695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.354 [2024-10-09 00:36:45.735702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.354 [2024-10-09 00:36:45.735709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.354 [2024-10-09 00:36:45.735725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-10-09 00:36:45.745658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.354 [2024-10-09 00:36:45.745709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.354 [2024-10-09 00:36:45.745726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.354 [2024-10-09 00:36:45.745734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.354 [2024-10-09 00:36:45.745740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.354 [2024-10-09 00:36:45.745754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-10-09 00:36:45.755640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.354 [2024-10-09 00:36:45.755685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.354 [2024-10-09 00:36:45.755698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.354 [2024-10-09 00:36:45.755705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.354 [2024-10-09 00:36:45.755711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.354 [2024-10-09 00:36:45.755734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.354 qpair failed and we were unable to recover it. 
00:29:15.354 [2024-10-09 00:36:45.765697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.354 [2024-10-09 00:36:45.765769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.354 [2024-10-09 00:36:45.765786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.354 [2024-10-09 00:36:45.765793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.354 [2024-10-09 00:36:45.765799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.354 [2024-10-09 00:36:45.765813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.775741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.775786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.775799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.775806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.775812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.775826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.785652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.785704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.785718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.785730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.785737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.785751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-10-09 00:36:45.795767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.795808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.795822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.795829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.795836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.795850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.805694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.805740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.805754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.805761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.805771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.805785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.815868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.815919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.815932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.815939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.815945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.815959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-10-09 00:36:45.825747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.825797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.825810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.825817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.825823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.825837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.835906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.835948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.835961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.835968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.835975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.835988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.845938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.845982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.845995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.846001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.846008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.846021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-10-09 00:36:45.855954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.856006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.856019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.856026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.856032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.856046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.865968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.866023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.866037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.866044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.866051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.866069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.876014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.876060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.876075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.876082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.876088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.876102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-10-09 00:36:45.886075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.886122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.886135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.886142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.886148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.886162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.896075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.896150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.896163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.896170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.896180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.896194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-10-09 00:36:45.906103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.355 [2024-10-09 00:36:45.906147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.355 [2024-10-09 00:36:45.906160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.355 [2024-10-09 00:36:45.906167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.355 [2024-10-09 00:36:45.906174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.355 [2024-10-09 00:36:45.906187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-10-09 00:36:45.916132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.356 [2024-10-09 00:36:45.916183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.356 [2024-10-09 00:36:45.916196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.356 [2024-10-09 00:36:45.916203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.356 [2024-10-09 00:36:45.916209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.356 [2024-10-09 00:36:45.916223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-10-09 00:36:45.926048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.356 [2024-10-09 00:36:45.926098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.356 [2024-10-09 00:36:45.926111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.356 [2024-10-09 00:36:45.926117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.356 [2024-10-09 00:36:45.926124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.356 [2024-10-09 00:36:45.926137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-10-09 00:36:45.936161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.356 [2024-10-09 00:36:45.936206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.356 [2024-10-09 00:36:45.936219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.356 [2024-10-09 00:36:45.936226] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.356 [2024-10-09 00:36:45.936232] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.356 [2024-10-09 00:36:45.936246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.356 qpair failed and we were unable to recover it. 
00:29:15.356 [2024-10-09 00:36:45.946204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.356 [2024-10-09 00:36:45.946248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.356 [2024-10-09 00:36:45.946261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.356 [2024-10-09 00:36:45.946268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.356 [2024-10-09 00:36:45.946274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.356 [2024-10-09 00:36:45.946288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-10-09 00:36:45.956262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.356 [2024-10-09 00:36:45.956376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.356 [2024-10-09 00:36:45.956389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.356 [2024-10-09 00:36:45.956396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.356 [2024-10-09 00:36:45.956402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.356 [2024-10-09 00:36:45.956416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-10-09 00:36:45.966246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.356 [2024-10-09 00:36:45.966329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.356 [2024-10-09 00:36:45.966342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.356 [2024-10-09 00:36:45.966349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.356 [2024-10-09 00:36:45.966355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.356 [2024-10-09 00:36:45.966369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.356 qpair failed and we were unable to recover it. 
00:29:15.356 [2024-10-09 00:36:45.976296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.356 [2024-10-09 00:36:45.976386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.356 [2024-10-09 00:36:45.976399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.356 [2024-10-09 00:36:45.976406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.356 [2024-10-09 00:36:45.976412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.356 [2024-10-09 00:36:45.976426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-10-09 00:36:45.986326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.356 [2024-10-09 00:36:45.986400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.356 [2024-10-09 00:36:45.986413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.356 [2024-10-09 00:36:45.986427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.356 [2024-10-09 00:36:45.986433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.356 [2024-10-09 00:36:45.986447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.618 [2024-10-09 00:36:45.996336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:45.996384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:45.996397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:45.996405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:45.996411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:45.996424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 
00:29:15.618 [2024-10-09 00:36:46.006327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.006372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.006386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:46.006393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:46.006399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:46.006412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 00:29:15.618 [2024-10-09 00:36:46.016292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.016349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.016362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:46.016369] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:46.016375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:46.016389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 00:29:15.618 [2024-10-09 00:36:46.026449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.026497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.026509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:46.026516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:46.026523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:46.026536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 
00:29:15.618 [2024-10-09 00:36:46.036326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.036377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.036402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:46.036410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:46.036417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:46.036436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 00:29:15.618 [2024-10-09 00:36:46.046473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.046517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.046532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:46.046539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:46.046546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:46.046561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 00:29:15.618 [2024-10-09 00:36:46.056505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.056593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.056618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:46.056627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:46.056634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:46.056653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 
00:29:15.618 [2024-10-09 00:36:46.066554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.066647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.066662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:46.066670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:46.066676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:46.066691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 00:29:15.618 [2024-10-09 00:36:46.076549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.076592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.076606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:46.076617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:46.076624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:46.076638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 00:29:15.618 [2024-10-09 00:36:46.086591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.086637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.086660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.618 [2024-10-09 00:36:46.086667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.618 [2024-10-09 00:36:46.086673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.618 [2024-10-09 00:36:46.086693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.618 qpair failed and we were unable to recover it. 
00:29:15.618 [2024-10-09 00:36:46.096622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.618 [2024-10-09 00:36:46.096668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.618 [2024-10-09 00:36:46.096682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.096689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.096695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.096709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.106624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.106669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.106683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.106689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.106696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.106710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.116651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.116695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.116708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.116715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.116724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.116738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 
00:29:15.619 [2024-10-09 00:36:46.126553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.126624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.126637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.126644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.126651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.126665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.136730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.136778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.136792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.136798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.136805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.136819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.146716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.146764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.146777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.146784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.146790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.146804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 
00:29:15.619 [2024-10-09 00:36:46.156772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.156825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.156838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.156845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.156851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.156865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.166651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.166693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.166709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.166716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.166726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.166740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.176807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.176854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.176867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.176874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.176880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.176894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 
00:29:15.619 [2024-10-09 00:36:46.186833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.186915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.186927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.186934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.186941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.186955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.196860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.196901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.196915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.196922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.196928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.196941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.206856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.206904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.206917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.206924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.206930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.206948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 
00:29:15.619 [2024-10-09 00:36:46.216929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.217009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.217023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.217031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.217037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.217051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.226967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.227057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.227070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.227077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.619 [2024-10-09 00:36:46.227083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.619 [2024-10-09 00:36:46.227097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.619 qpair failed and we were unable to recover it. 00:29:15.619 [2024-10-09 00:36:46.236960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.619 [2024-10-09 00:36:46.237008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.619 [2024-10-09 00:36:46.237022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.619 [2024-10-09 00:36:46.237029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.620 [2024-10-09 00:36:46.237035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.620 [2024-10-09 00:36:46.237049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.620 qpair failed and we were unable to recover it. 
00:29:15.620 [2024-10-09 00:36:46.246890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.620 [2024-10-09 00:36:46.246947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.620 [2024-10-09 00:36:46.246961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.620 [2024-10-09 00:36:46.246968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.620 [2024-10-09 00:36:46.246977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.620 [2024-10-09 00:36:46.246991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.620 qpair failed and we were unable to recover it. 00:29:15.882 [2024-10-09 00:36:46.257022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.257066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.257083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.257090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.257096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.257110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 00:29:15.882 [2024-10-09 00:36:46.267063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.267113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.267127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.267134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.267141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.267154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 
00:29:15.882 [2024-10-09 00:36:46.277066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.277109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.277121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.277128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.277135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.277148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 00:29:15.882 [2024-10-09 00:36:46.287106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.287186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.287199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.287206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.287212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.287227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 00:29:15.882 [2024-10-09 00:36:46.297114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.297159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.297172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.297179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.297191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.297205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 
00:29:15.882 [2024-10-09 00:36:46.307176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.307227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.307240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.307247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.307253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.307267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 00:29:15.882 [2024-10-09 00:36:46.317157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.317198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.317211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.317218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.317224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.317238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 00:29:15.882 [2024-10-09 00:36:46.327229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.327284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.327296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.327303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.327309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.327323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 
00:29:15.882 [2024-10-09 00:36:46.337235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.337279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.337292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.337299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.337306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.337319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 00:29:15.882 [2024-10-09 00:36:46.347266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.347317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.347330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.347337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.347343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.347356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 00:29:15.882 [2024-10-09 00:36:46.357288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.357334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.357347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.357354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.357360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.357374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 
00:29:15.882 [2024-10-09 00:36:46.367323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.367366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.367379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.367387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.882 [2024-10-09 00:36:46.367393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.882 [2024-10-09 00:36:46.367406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.882 qpair failed and we were unable to recover it. 00:29:15.882 [2024-10-09 00:36:46.377221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.882 [2024-10-09 00:36:46.377273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.882 [2024-10-09 00:36:46.377286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.882 [2024-10-09 00:36:46.377292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.377300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.377314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 00:29:15.883 [2024-10-09 00:36:46.387384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.387430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.387443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.387450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.387460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.387473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 
00:29:15.883 [2024-10-09 00:36:46.397435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.397516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.397529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.397536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.397542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.397556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 00:29:15.883 [2024-10-09 00:36:46.407400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.407442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.407455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.407462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.407468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.407482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 00:29:15.883 [2024-10-09 00:36:46.417461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.417508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.417521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.417528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.417534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.417548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 
00:29:15.883 [2024-10-09 00:36:46.427493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.427566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.427579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.427586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.427592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.427606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 00:29:15.883 [2024-10-09 00:36:46.437547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.437599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.437613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.437620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.437626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.437640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 00:29:15.883 [2024-10-09 00:36:46.447537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.447580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.447593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.447600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.447606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.447620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 
00:29:15.883 [2024-10-09 00:36:46.457568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.457613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.457626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.457633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.457639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.457653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 00:29:15.883 [2024-10-09 00:36:46.467614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.467669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.467682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.467689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.467695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.467708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 00:29:15.883 [2024-10-09 00:36:46.477615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.477675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.477688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.477699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.477705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.477723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 
00:29:15.883 [2024-10-09 00:36:46.487629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.487678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.487690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.487697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.883 [2024-10-09 00:36:46.487703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.883 [2024-10-09 00:36:46.487717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.883 qpair failed and we were unable to recover it. 00:29:15.883 [2024-10-09 00:36:46.497676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.883 [2024-10-09 00:36:46.497727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.883 [2024-10-09 00:36:46.497741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.883 [2024-10-09 00:36:46.497748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.884 [2024-10-09 00:36:46.497754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.884 [2024-10-09 00:36:46.497768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.884 qpair failed and we were unable to recover it. 00:29:15.884 [2024-10-09 00:36:46.507742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.884 [2024-10-09 00:36:46.507812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.884 [2024-10-09 00:36:46.507825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.884 [2024-10-09 00:36:46.507832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.884 [2024-10-09 00:36:46.507838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:15.884 [2024-10-09 00:36:46.507852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.884 qpair failed and we were unable to recover it. 
00:29:16.145 [2024-10-09 00:36:46.517732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.145 [2024-10-09 00:36:46.517775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.145 [2024-10-09 00:36:46.517788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.145 [2024-10-09 00:36:46.517795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.145 [2024-10-09 00:36:46.517802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.145 [2024-10-09 00:36:46.517815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.145 qpair failed and we were unable to recover it. 00:29:16.145 [2024-10-09 00:36:46.527632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.145 [2024-10-09 00:36:46.527701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.145 [2024-10-09 00:36:46.527716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.145 [2024-10-09 00:36:46.527727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.145 [2024-10-09 00:36:46.527734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.145 [2024-10-09 00:36:46.527749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.145 qpair failed and we were unable to recover it. 00:29:16.145 [2024-10-09 00:36:46.537739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.145 [2024-10-09 00:36:46.537785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.145 [2024-10-09 00:36:46.537799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.145 [2024-10-09 00:36:46.537806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.145 [2024-10-09 00:36:46.537813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.145 [2024-10-09 00:36:46.537827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.145 qpair failed and we were unable to recover it. 
00:29:16.145 [2024-10-09 00:36:46.547819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.145 [2024-10-09 00:36:46.547862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.145 [2024-10-09 00:36:46.547875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.145 [2024-10-09 00:36:46.547882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.145 [2024-10-09 00:36:46.547889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.145 [2024-10-09 00:36:46.547902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.145 qpair failed and we were unable to recover it. 00:29:16.145 [2024-10-09 00:36:46.557820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.145 [2024-10-09 00:36:46.557865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.145 [2024-10-09 00:36:46.557879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.145 [2024-10-09 00:36:46.557886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.145 [2024-10-09 00:36:46.557892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.145 [2024-10-09 00:36:46.557906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.145 qpair failed and we were unable to recover it. 00:29:16.145 [2024-10-09 00:36:46.567870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.145 [2024-10-09 00:36:46.567912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.145 [2024-10-09 00:36:46.567925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.145 [2024-10-09 00:36:46.567935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.145 [2024-10-09 00:36:46.567941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.145 [2024-10-09 00:36:46.567954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.145 qpair failed and we were unable to recover it. 
00:29:16.145 [2024-10-09 00:36:46.577759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.145 [2024-10-09 00:36:46.577806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.145 [2024-10-09 00:36:46.577819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.145 [2024-10-09 00:36:46.577826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.145 [2024-10-09 00:36:46.577832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.145 [2024-10-09 00:36:46.577846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.587920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.587977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.587992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.587999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.588006] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.588025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.597953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.598000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.598014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.598021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.598027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.598041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 
00:29:16.146 [2024-10-09 00:36:46.607965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.608013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.608027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.608034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.608040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.608054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.618019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.618065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.618078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.618085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.618091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.618105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.628038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.628081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.628093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.628100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.628107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.628121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 
00:29:16.146 [2024-10-09 00:36:46.638061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.638103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.638116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.638123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.638129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.638143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.648104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.648148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.648162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.648169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.648175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.648189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.658134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.658178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.658195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.658202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.658209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.658222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 
00:29:16.146 [2024-10-09 00:36:46.668145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.668194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.668208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.668214] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.668221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.668234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.678022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.678072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.678084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.678091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.678097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.678111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.688182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.688228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.688242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.688249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.688255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.688268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 
00:29:16.146 [2024-10-09 00:36:46.698202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.698250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.698263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.698270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.698276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.698297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.708108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.708207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.708220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.708227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.708233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.708247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 00:29:16.146 [2024-10-09 00:36:46.718257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.146 [2024-10-09 00:36:46.718307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.146 [2024-10-09 00:36:46.718320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.146 [2024-10-09 00:36:46.718327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.146 [2024-10-09 00:36:46.718333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.146 [2024-10-09 00:36:46.718347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.146 qpair failed and we were unable to recover it. 
00:29:16.147 [2024-10-09 00:36:46.728281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.147 [2024-10-09 00:36:46.728323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.147 [2024-10-09 00:36:46.728336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.147 [2024-10-09 00:36:46.728342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.147 [2024-10-09 00:36:46.728349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.147 [2024-10-09 00:36:46.728364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.147 qpair failed and we were unable to recover it. 00:29:16.147 [2024-10-09 00:36:46.738282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.147 [2024-10-09 00:36:46.738329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.147 [2024-10-09 00:36:46.738342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.147 [2024-10-09 00:36:46.738349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.147 [2024-10-09 00:36:46.738355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.147 [2024-10-09 00:36:46.738369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.147 qpair failed and we were unable to recover it. 00:29:16.147 [2024-10-09 00:36:46.748350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.147 [2024-10-09 00:36:46.748441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.147 [2024-10-09 00:36:46.748457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.147 [2024-10-09 00:36:46.748464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.147 [2024-10-09 00:36:46.748470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.147 [2024-10-09 00:36:46.748483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.147 qpair failed and we were unable to recover it. 
00:29:16.147 [2024-10-09 00:36:46.758373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.147 [2024-10-09 00:36:46.758426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.147 [2024-10-09 00:36:46.758450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.147 [2024-10-09 00:36:46.758459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.147 [2024-10-09 00:36:46.758466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.147 [2024-10-09 00:36:46.758484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.147 qpair failed and we were unable to recover it. 00:29:16.147 [2024-10-09 00:36:46.768417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.147 [2024-10-09 00:36:46.768531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.147 [2024-10-09 00:36:46.768546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.147 [2024-10-09 00:36:46.768554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.147 [2024-10-09 00:36:46.768560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.147 [2024-10-09 00:36:46.768575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.147 qpair failed and we were unable to recover it. 00:29:16.409 [2024-10-09 00:36:46.778431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.409 [2024-10-09 00:36:46.778479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.409 [2024-10-09 00:36:46.778492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.409 [2024-10-09 00:36:46.778500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.409 [2024-10-09 00:36:46.778507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.409 [2024-10-09 00:36:46.778521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.409 qpair failed and we were unable to recover it. 
00:29:16.409 [2024-10-09 00:36:46.788450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.409 [2024-10-09 00:36:46.788504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.409 [2024-10-09 00:36:46.788517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.409 [2024-10-09 00:36:46.788524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.409 [2024-10-09 00:36:46.788531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.409 [2024-10-09 00:36:46.788549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-10-09 00:36:46.798459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.409 [2024-10-09 00:36:46.798503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.409 [2024-10-09 00:36:46.798516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.409 [2024-10-09 00:36:46.798523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.409 [2024-10-09 00:36:46.798529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.409 [2024-10-09 00:36:46.798543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-10-09 00:36:46.808492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.409 [2024-10-09 00:36:46.808540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.409 [2024-10-09 00:36:46.808554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.409 [2024-10-09 00:36:46.808561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.409 [2024-10-09 00:36:46.808567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.409 [2024-10-09 00:36:46.808581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.409 qpair failed and we were unable to recover it. 
00:29:16.409 [2024-10-09 00:36:46.818530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.409 [2024-10-09 00:36:46.818577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.409 [2024-10-09 00:36:46.818591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.818597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.818604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.818617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.828541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.828593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.828606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.828613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.828620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.828633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.838580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.838628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.838641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.838648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.838655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.838669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 
00:29:16.410 [2024-10-09 00:36:46.848615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.848659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.848672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.848679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.848685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.848699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.858514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.858598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.858611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.858618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.858625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.858639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.868676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.868730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.868743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.868751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.868757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.868771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 
00:29:16.410 [2024-10-09 00:36:46.878686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.878744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.878757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.878764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.878774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.878789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.888685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.888729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.888742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.888749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.888755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.888769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.898763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.898808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.898821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.898828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.898834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.898848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 
00:29:16.410 [2024-10-09 00:36:46.908778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.908827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.908840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.908846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.908853] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.908867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.918796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.918838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.918851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.918858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.918864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.918878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.928858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.928935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.928948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.928955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.928962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.928976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 
00:29:16.410 [2024-10-09 00:36:46.938824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.938872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.938885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.938891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.938898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.938911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.948884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.948930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.948942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.410 [2024-10-09 00:36:46.948949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.410 [2024-10-09 00:36:46.948956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.410 [2024-10-09 00:36:46.948970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-10-09 00:36:46.958900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.410 [2024-10-09 00:36:46.958952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.410 [2024-10-09 00:36:46.958966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.411 [2024-10-09 00:36:46.958972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.411 [2024-10-09 00:36:46.958979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.411 [2024-10-09 00:36:46.958992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.411 qpair failed and we were unable to recover it. 
00:29:16.411 [2024-10-09 00:36:46.968932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.411 [2024-10-09 00:36:46.969013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.411 [2024-10-09 00:36:46.969026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.411 [2024-10-09 00:36:46.969036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.411 [2024-10-09 00:36:46.969043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.411 [2024-10-09 00:36:46.969056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.411 qpair failed and we were unable to recover it. 00:29:16.411 [2024-10-09 00:36:46.979009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.411 [2024-10-09 00:36:46.979054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.411 [2024-10-09 00:36:46.979067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.411 [2024-10-09 00:36:46.979074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.411 [2024-10-09 00:36:46.979080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.411 [2024-10-09 00:36:46.979094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.411 qpair failed and we were unable to recover it. 00:29:16.411 [2024-10-09 00:36:46.989009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.411 [2024-10-09 00:36:46.989059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.411 [2024-10-09 00:36:46.989072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.411 [2024-10-09 00:36:46.989079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.411 [2024-10-09 00:36:46.989085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.411 [2024-10-09 00:36:46.989099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.411 qpair failed and we were unable to recover it. 
00:29:16.411 [2024-10-09 00:36:46.999013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.411 [2024-10-09 00:36:46.999055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.411 [2024-10-09 00:36:46.999069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.411 [2024-10-09 00:36:46.999075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.411 [2024-10-09 00:36:46.999082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.411 [2024-10-09 00:36:46.999095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.411 qpair failed and we were unable to recover it. 00:29:16.411 [2024-10-09 00:36:47.009019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.411 [2024-10-09 00:36:47.009061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.411 [2024-10-09 00:36:47.009074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.411 [2024-10-09 00:36:47.009081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.411 [2024-10-09 00:36:47.009087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.411 [2024-10-09 00:36:47.009101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.411 qpair failed and we were unable to recover it. 00:29:16.411 [2024-10-09 00:36:47.019079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.411 [2024-10-09 00:36:47.019131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.411 [2024-10-09 00:36:47.019144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.411 [2024-10-09 00:36:47.019151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.411 [2024-10-09 00:36:47.019158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.411 [2024-10-09 00:36:47.019171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.411 qpair failed and we were unable to recover it. 
00:29:16.411 [2024-10-09 00:36:47.029092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.411 [2024-10-09 00:36:47.029142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.411 [2024-10-09 00:36:47.029154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.411 [2024-10-09 00:36:47.029161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.411 [2024-10-09 00:36:47.029167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.411 [2024-10-09 00:36:47.029181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.411 qpair failed and we were unable to recover it. 00:29:16.411 [2024-10-09 00:36:47.039094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.411 [2024-10-09 00:36:47.039137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.411 [2024-10-09 00:36:47.039149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.411 [2024-10-09 00:36:47.039156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.411 [2024-10-09 00:36:47.039162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.411 [2024-10-09 00:36:47.039175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.411 qpair failed and we were unable to recover it. 00:29:16.673 [2024-10-09 00:36:47.049145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.673 [2024-10-09 00:36:47.049187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.673 [2024-10-09 00:36:47.049200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.673 [2024-10-09 00:36:47.049207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.673 [2024-10-09 00:36:47.049213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.673 [2024-10-09 00:36:47.049229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.673 qpair failed and we were unable to recover it. 
00:29:16.673 [2024-10-09 00:36:47.059133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.673 [2024-10-09 00:36:47.059181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.673 [2024-10-09 00:36:47.059194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.673 [2024-10-09 00:36:47.059204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.673 [2024-10-09 00:36:47.059211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.673 [2024-10-09 00:36:47.059225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.673 qpair failed and we were unable to recover it. 00:29:16.673 [2024-10-09 00:36:47.069215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.673 [2024-10-09 00:36:47.069262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.673 [2024-10-09 00:36:47.069274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.673 [2024-10-09 00:36:47.069281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.673 [2024-10-09 00:36:47.069288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.673 [2024-10-09 00:36:47.069301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.673 qpair failed and we were unable to recover it. 00:29:16.673 [2024-10-09 00:36:47.079225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.673 [2024-10-09 00:36:47.079274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.673 [2024-10-09 00:36:47.079287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.673 [2024-10-09 00:36:47.079294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.673 [2024-10-09 00:36:47.079301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.673 [2024-10-09 00:36:47.079314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.673 qpair failed and we were unable to recover it. 
00:29:16.673 [2024-10-09 00:36:47.089250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.673 [2024-10-09 00:36:47.089291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.673 [2024-10-09 00:36:47.089304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.673 [2024-10-09 00:36:47.089311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.673 [2024-10-09 00:36:47.089318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.673 [2024-10-09 00:36:47.089331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.673 qpair failed and we were unable to recover it. 00:29:16.673 [2024-10-09 00:36:47.099260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.673 [2024-10-09 00:36:47.099304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.673 [2024-10-09 00:36:47.099317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.673 [2024-10-09 00:36:47.099324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.673 [2024-10-09 00:36:47.099330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.673 [2024-10-09 00:36:47.099343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.673 qpair failed and we were unable to recover it. 00:29:16.673 [2024-10-09 00:36:47.109329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.673 [2024-10-09 00:36:47.109427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.673 [2024-10-09 00:36:47.109440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.673 [2024-10-09 00:36:47.109447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.673 [2024-10-09 00:36:47.109453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.673 [2024-10-09 00:36:47.109466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.673 qpair failed and we were unable to recover it. 
00:29:16.673 [2024-10-09 00:36:47.119351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.673 [2024-10-09 00:36:47.119394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.673 [2024-10-09 00:36:47.119407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.119414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.119420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.119434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.129369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.129413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.129426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.129433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.129439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.129453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.139391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.139437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.139450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.139457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.139464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.139477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 
00:29:16.674 [2024-10-09 00:36:47.149303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.149348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.149364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.149371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.149377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.149391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.159442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.159488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.159502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.159509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.159515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.159529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.169364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.169409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.169422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.169429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.169435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.169449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 
00:29:16.674 [2024-10-09 00:36:47.179489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.179553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.179566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.179573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.179579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.179593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.189549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.189596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.189610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.189617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.189623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.189640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.199527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.199613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.199625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.199632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.199638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.199652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 
00:29:16.674 [2024-10-09 00:36:47.209572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.209615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.209628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.209634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.209641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.209654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.219608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.219652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.219665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.219672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.219678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.219692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.229604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.229653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.229666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.229673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.229680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.229693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 
00:29:16.674 [2024-10-09 00:36:47.239660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.239708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.239728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.239735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.239741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.239755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.249689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.249739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.249752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.249759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.674 [2024-10-09 00:36:47.249765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.674 [2024-10-09 00:36:47.249779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.674 qpair failed and we were unable to recover it. 00:29:16.674 [2024-10-09 00:36:47.259724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.674 [2024-10-09 00:36:47.259772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.674 [2024-10-09 00:36:47.259785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.674 [2024-10-09 00:36:47.259792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.675 [2024-10-09 00:36:47.259798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.675 [2024-10-09 00:36:47.259812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.675 qpair failed and we were unable to recover it. 
00:29:16.675 [2024-10-09 00:36:47.269727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.675 [2024-10-09 00:36:47.269781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.675 [2024-10-09 00:36:47.269794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.675 [2024-10-09 00:36:47.269800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.675 [2024-10-09 00:36:47.269807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.675 [2024-10-09 00:36:47.269821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.675 qpair failed and we were unable to recover it. 00:29:16.675 [2024-10-09 00:36:47.279771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.675 [2024-10-09 00:36:47.279814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.675 [2024-10-09 00:36:47.279827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.675 [2024-10-09 00:36:47.279834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.675 [2024-10-09 00:36:47.279840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.675 [2024-10-09 00:36:47.279857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.675 qpair failed and we were unable to recover it. 00:29:16.675 [2024-10-09 00:36:47.289821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.675 [2024-10-09 00:36:47.289876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.675 [2024-10-09 00:36:47.289889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.675 [2024-10-09 00:36:47.289896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.675 [2024-10-09 00:36:47.289902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.675 [2024-10-09 00:36:47.289916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.675 qpair failed and we were unable to recover it. 
00:29:16.675 [2024-10-09 00:36:47.299845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.675 [2024-10-09 00:36:47.299893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.675 [2024-10-09 00:36:47.299906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.675 [2024-10-09 00:36:47.299913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.675 [2024-10-09 00:36:47.299919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.675 [2024-10-09 00:36:47.299933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.675 qpair failed and we were unable to recover it. 00:29:16.937 [2024-10-09 00:36:47.309855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.309924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.309936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.309943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.309950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.309964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 00:29:16.937 [2024-10-09 00:36:47.319741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.319783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.319797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.319805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.319811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.319826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 
00:29:16.937 [2024-10-09 00:36:47.329858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.329900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.329917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.329924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.329930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.329944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 00:29:16.937 [2024-10-09 00:36:47.339942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.340031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.340044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.340051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.340057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.340071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 00:29:16.937 [2024-10-09 00:36:47.349957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.350024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.350038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.350045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.350051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.350069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 
00:29:16.937 [2024-10-09 00:36:47.359959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.360003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.360016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.360023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.360030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.360043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 00:29:16.937 [2024-10-09 00:36:47.370055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.370122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.370135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.370142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.370151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.370166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 00:29:16.937 [2024-10-09 00:36:47.380044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.380092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.380105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.380112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.380118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.380132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 
00:29:16.937 [2024-10-09 00:36:47.390064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.390109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.390121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.390128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.390135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.390149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 00:29:16.937 [2024-10-09 00:36:47.400087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.400133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.400146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.400153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.400160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.400174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 00:29:16.937 [2024-10-09 00:36:47.410122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.410164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.410177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.410184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.410190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.410204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 
00:29:16.937 [2024-10-09 00:36:47.420135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.420182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.937 [2024-10-09 00:36:47.420196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.937 [2024-10-09 00:36:47.420203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.937 [2024-10-09 00:36:47.420209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.937 [2024-10-09 00:36:47.420222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.937 qpair failed and we were unable to recover it. 00:29:16.937 [2024-10-09 00:36:47.430139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.937 [2024-10-09 00:36:47.430188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.430201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.430208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.430214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.430228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 00:29:16.938 [2024-10-09 00:36:47.440174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.440218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.440231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.440238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.440244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.440257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 
00:29:16.938 [2024-10-09 00:36:47.450189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.450241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.450253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.450260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.450267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.450280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 00:29:16.938 [2024-10-09 00:36:47.460224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.460272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.460285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.460292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.460301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.460315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 00:29:16.938 [2024-10-09 00:36:47.470265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.470314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.470326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.470333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.470340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.470353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 
00:29:16.938 [2024-10-09 00:36:47.480299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.480342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.480355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.480362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.480368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.480382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 00:29:16.938 [2024-10-09 00:36:47.490314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.490399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.490412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.490419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.490426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.490439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 00:29:16.938 [2024-10-09 00:36:47.500351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.500398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.500411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.500418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.500424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.500437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 
00:29:16.938 [2024-10-09 00:36:47.510385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.510432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.510445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.510452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.510458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.510472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 00:29:16.938 [2024-10-09 00:36:47.520406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.520461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.520474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.520481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.520487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.520501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 00:29:16.938 [2024-10-09 00:36:47.530434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.530528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.530541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.530548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.530554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.530568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 
00:29:16.938 [2024-10-09 00:36:47.540444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.540495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.540519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.540527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.540534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.540553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 00:29:16.938 [2024-10-09 00:36:47.550454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.938 [2024-10-09 00:36:47.550505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.938 [2024-10-09 00:36:47.550530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.938 [2024-10-09 00:36:47.550543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.938 [2024-10-09 00:36:47.550550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.938 [2024-10-09 00:36:47.550569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.938 qpair failed and we were unable to recover it. 00:29:16.938 [2024-10-09 00:36:47.560462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.939 [2024-10-09 00:36:47.560513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.939 [2024-10-09 00:36:47.560537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.939 [2024-10-09 00:36:47.560546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.939 [2024-10-09 00:36:47.560553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:16.939 [2024-10-09 00:36:47.560572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.939 qpair failed and we were unable to recover it. 
00:29:17.200 [2024-10-09 00:36:47.570535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.200 [2024-10-09 00:36:47.570584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.200 [2024-10-09 00:36:47.570599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.200 [2024-10-09 00:36:47.570607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.200 [2024-10-09 00:36:47.570614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.200 [2024-10-09 00:36:47.570629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.200 qpair failed and we were unable to recover it. 00:29:17.200 [2024-10-09 00:36:47.580442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.200 [2024-10-09 00:36:47.580489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.200 [2024-10-09 00:36:47.580503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.200 [2024-10-09 00:36:47.580510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.200 [2024-10-09 00:36:47.580517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.200 [2024-10-09 00:36:47.580532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.200 qpair failed and we were unable to recover it. 00:29:17.200 [2024-10-09 00:36:47.590596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.200 [2024-10-09 00:36:47.590649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.200 [2024-10-09 00:36:47.590663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.200 [2024-10-09 00:36:47.590670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.200 [2024-10-09 00:36:47.590676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.200 [2024-10-09 00:36:47.590690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.200 qpair failed and we were unable to recover it. 
00:29:17.200 [2024-10-09 00:36:47.600576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.600617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.600631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.600638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.600644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.600658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.610593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.610636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.610649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.610655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.610662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.610675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.620673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.620718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.620735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.620742] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.620748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.620762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 
00:29:17.201 [2024-10-09 00:36:47.630695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.630744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.630758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.630764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.630771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.630785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.640723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.640768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.640784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.640791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.640798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.640811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.650745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.650815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.650828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.650835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.650841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.650855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 
00:29:17.201 [2024-10-09 00:36:47.660764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.660808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.660821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.660828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.660834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.660848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.670827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.670919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.670932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.670939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.670945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.670959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.680829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.680876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.680889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.680896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.680902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.680916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 
00:29:17.201 [2024-10-09 00:36:47.690824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.690873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.690886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.690893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.690899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.690913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.700862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.700909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.700923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.700930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.700936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.700950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.710918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.711017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.711030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.711037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.711043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.711057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 
00:29:17.201 [2024-10-09 00:36:47.720902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.720944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.720956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.720963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.720969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.720983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.730938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.730983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.201 [2024-10-09 00:36:47.730999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.201 [2024-10-09 00:36:47.731006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.201 [2024-10-09 00:36:47.731013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.201 [2024-10-09 00:36:47.731026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.201 qpair failed and we were unable to recover it. 00:29:17.201 [2024-10-09 00:36:47.740962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.201 [2024-10-09 00:36:47.741008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.741021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.741028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.741034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.741048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 
00:29:17.202 [2024-10-09 00:36:47.751024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.202 [2024-10-09 00:36:47.751074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.751086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.751093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.751100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.751113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 00:29:17.202 [2024-10-09 00:36:47.761035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.202 [2024-10-09 00:36:47.761076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.761089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.761096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.761102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.761116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 00:29:17.202 [2024-10-09 00:36:47.771046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.202 [2024-10-09 00:36:47.771090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.771103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.771109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.771116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.771133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 
00:29:17.202 [2024-10-09 00:36:47.781072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.202 [2024-10-09 00:36:47.781118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.781131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.781138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.781144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.781158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 00:29:17.202 [2024-10-09 00:36:47.791090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.202 [2024-10-09 00:36:47.791138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.791150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.791157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.791164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.791178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 00:29:17.202 [2024-10-09 00:36:47.801138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.202 [2024-10-09 00:36:47.801187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.801200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.801207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.801213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.801227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 
00:29:17.202 [2024-10-09 00:36:47.811138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.202 [2024-10-09 00:36:47.811178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.811191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.811198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.811204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.811219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 00:29:17.202 [2024-10-09 00:36:47.821068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.202 [2024-10-09 00:36:47.821115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.821131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.821138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.821144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.821157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 00:29:17.202 [2024-10-09 00:36:47.831216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.202 [2024-10-09 00:36:47.831276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.202 [2024-10-09 00:36:47.831289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.202 [2024-10-09 00:36:47.831296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.202 [2024-10-09 00:36:47.831302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.202 [2024-10-09 00:36:47.831316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.202 qpair failed and we were unable to recover it. 
00:29:17.464 [2024-10-09 00:36:47.841205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.841249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.841262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.841269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.841275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.841288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 00:29:17.464 [2024-10-09 00:36:47.851274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.851319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.851332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.851339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.851345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.851358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 00:29:17.464 [2024-10-09 00:36:47.861298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.861394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.861407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.861413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.861423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.861437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 
00:29:17.464 [2024-10-09 00:36:47.871328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.871375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.871388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.871395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.871401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.871415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 00:29:17.464 [2024-10-09 00:36:47.881345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.881393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.881406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.881413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.881420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.881435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 00:29:17.464 [2024-10-09 00:36:47.891345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.891388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.891401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.891409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.891415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.891429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 
00:29:17.464 [2024-10-09 00:36:47.901407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.901450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.901464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.901471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.901477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.901490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 00:29:17.464 [2024-10-09 00:36:47.911415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.911470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.911494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.911502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.911509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.911528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 00:29:17.464 [2024-10-09 00:36:47.921451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.921501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.921515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.921523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.921529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.921544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 
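While these CONNECT retries are running, the target side can also be inspected over the SPDK RPC socket to confirm the subsystem and its listener still exist; this is only a sketch, assuming the target application is still up and serving the default RPC socket:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # list configured NVMe-oF subsystems, their listeners and attached hosts
  ./scripts/rpc.py nvmf_get_subsystems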
00:29:17.464 [2024-10-09 00:36:47.931461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.931511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.464 [2024-10-09 00:36:47.931535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.464 [2024-10-09 00:36:47.931544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.464 [2024-10-09 00:36:47.931551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.464 [2024-10-09 00:36:47.931570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.464 qpair failed and we were unable to recover it. 00:29:17.464 [2024-10-09 00:36:47.941534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.464 [2024-10-09 00:36:47.941600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:47.941625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:47.941633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:47.941640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:47.941659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 00:29:17.465 [2024-10-09 00:36:47.951548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:47.951603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:47.951618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:47.951625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:47.951635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:47.951650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 
00:29:17.465 [2024-10-09 00:36:47.961560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:47.961604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:47.961617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:47.961624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:47.961631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:47.961645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 00:29:17.465 [2024-10-09 00:36:47.971596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:47.971644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:47.971657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:47.971664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:47.971671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:47.971685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 00:29:17.465 [2024-10-09 00:36:47.981617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:47.981663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:47.981677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:47.981683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:47.981690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:47.981703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 
00:29:17.465 [2024-10-09 00:36:47.991531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:47.991580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:47.991595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:47.991602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:47.991609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:47.991623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 00:29:17.465 [2024-10-09 00:36:48.001653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:48.001696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:48.001710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:48.001717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:48.001727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:48.001742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 00:29:17.465 [2024-10-09 00:36:48.011588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:48.011632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:48.011645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:48.011652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:48.011658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:48.011672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 
00:29:17.465 [2024-10-09 00:36:48.021745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:48.021798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:48.021812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:48.021819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:48.021825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:48.021840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 00:29:17.465 [2024-10-09 00:36:48.031763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:48.031811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:48.031825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:48.031834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:48.031843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:48.031858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 00:29:17.465 [2024-10-09 00:36:48.041778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:48.041822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:48.041836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:48.041846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:48.041852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:48.041867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 
00:29:17.465 [2024-10-09 00:36:48.051792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:48.051839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:48.051852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:48.051859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:48.051865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:48.051879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 00:29:17.465 [2024-10-09 00:36:48.061839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:48.061901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:48.061914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:48.061921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:48.061927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:48.061941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 00:29:17.465 [2024-10-09 00:36:48.071750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:48.071802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.465 [2024-10-09 00:36:48.071816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.465 [2024-10-09 00:36:48.071823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.465 [2024-10-09 00:36:48.071829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.465 [2024-10-09 00:36:48.071844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.465 qpair failed and we were unable to recover it. 
00:29:17.465 [2024-10-09 00:36:48.081812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.465 [2024-10-09 00:36:48.081856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.466 [2024-10-09 00:36:48.081870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.466 [2024-10-09 00:36:48.081877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.466 [2024-10-09 00:36:48.081883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.466 [2024-10-09 00:36:48.081897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.466 qpair failed and we were unable to recover it. 00:29:17.466 [2024-10-09 00:36:48.091923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.466 [2024-10-09 00:36:48.091979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.466 [2024-10-09 00:36:48.091992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.466 [2024-10-09 00:36:48.091999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.466 [2024-10-09 00:36:48.092005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.466 [2024-10-09 00:36:48.092019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.466 qpair failed and we were unable to recover it. 00:29:17.726 [2024-10-09 00:36:48.101958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.727 [2024-10-09 00:36:48.102031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.727 [2024-10-09 00:36:48.102044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.727 [2024-10-09 00:36:48.102051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.727 [2024-10-09 00:36:48.102057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa9fc000b90 00:29:17.727 [2024-10-09 00:36:48.102071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.727 qpair failed and we were unable to recover it. 
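Up to this point every failure names the same transport qpair pointer (tqpair=0x7fa9fc000b90, qpair id 2), so this is one I/O queue pair being retried rather than many queues failing at once. A small awk sketch over a saved copy of the log (file name assumed) groups the failures by pointer, which makes it easy to verify whether any other queue was affected later in the run:

  # tally connect failures per transport qpair pointer
  awk '/Failed to connect tqpair=/ {
         if (match($0, /tqpair=0x[0-9a-f]+/))
             count[substr($0, RSTART + 7, RLENGTH - 7)]++
       }
       END { for (p in count) print p, count[p] }' nvmf-tcp-phy-autotest.log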
00:29:17.727 [2024-10-09 00:36:48.111957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.727 [2024-10-09 00:36:48.112049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.727 [2024-10-09 00:36:48.112114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.727 [2024-10-09 00:36:48.112141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.727 [2024-10-09 00:36:48.112162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa04000b90 00:29:17.727 [2024-10-09 00:36:48.112215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.727 qpair failed and we were unable to recover it. 00:29:17.727 [2024-10-09 00:36:48.121992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.727 [2024-10-09 00:36:48.122057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.727 [2024-10-09 00:36:48.122086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.727 [2024-10-09 00:36:48.122101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.727 [2024-10-09 00:36:48.122114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa04000b90 00:29:17.727 [2024-10-09 00:36:48.122143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.727 qpair failed and we were unable to recover it. 00:29:17.727 [2024-10-09 00:36:48.122291] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:17.727 A controller has encountered a failure and is being reset. 00:29:17.727 [2024-10-09 00:36:48.122350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132aed0 (9): Bad file descriptor 00:29:17.727 Controller properly reset. 00:29:17.727 Initializing NVMe Controllers 00:29:17.727 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:17.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:17.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:17.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:17.727 Initialization complete. Launching workers. 
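At this point the host gives up on retrying: a Keep Alive submission fails, the controller is marked failed and reset, and the initiator re-attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, associating the TCP connection with lcores 0 through 3 before relaunching its worker threads. If that listener ever needed to be checked by hand from the initiator host, a plain nvme-cli discover/connect sequence against the address and NQN shown in the log would be one way to do it; this is a sketch only, assuming nvme-cli is installed, and is not part of the test scripts themselves:

  # confirm the discovery service and the subsystem listener are reachable
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # attach to the subsystem, then detach again
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1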
00:29:17.727 Starting thread on core 1 00:29:17.727 Starting thread on core 2 00:29:17.727 Starting thread on core 3 00:29:17.727 Starting thread on core 0 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:17.727 00:29:17.727 real 0m11.499s 00:29:17.727 user 0m21.919s 00:29:17.727 sys 0m3.890s 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.727 ************************************ 00:29:17.727 END TEST nvmf_target_disconnect_tc2 00:29:17.727 ************************************ 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:17.727 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:17.727 rmmod nvme_tcp 00:29:17.987 rmmod nvme_fabrics 00:29:17.987 rmmod nvme_keyring 00:29:17.987 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 3438321 ']' 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 3438321 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3438321 ']' 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3438321 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3438321 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3438321' 00:29:17.988 killing process with pid 3438321 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 3438321 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3438321 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.988 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.249 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.249 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.249 00:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.161 00:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.161 00:29:20.161 real 0m21.903s 00:29:20.161 user 0m49.821s 00:29:20.161 sys 0m10.204s 00:29:20.161 00:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.161 00:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:20.161 ************************************ 00:29:20.161 END TEST nvmf_target_disconnect 00:29:20.161 ************************************ 00:29:20.161 00:36:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:20.161 00:29:20.161 real 6m29.719s 00:29:20.161 user 11m19.604s 00:29:20.161 sys 2m15.468s 00:29:20.161 00:36:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.161 00:36:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.161 ************************************ 00:29:20.161 END TEST nvmf_host 00:29:20.161 ************************************ 00:29:20.161 00:36:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:20.161 00:36:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:20.161 00:36:50 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:20.161 00:36:50 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:20.161 00:36:50 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.161 00:36:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.422 ************************************ 00:29:20.422 START TEST nvmf_target_core_interrupt_mode 00:29:20.422 ************************************ 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:20.422 * Looking for test storage... 00:29:20.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.422 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.423 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.423 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.423 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.423 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:20.423 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:20.423 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.423 00:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:20.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.423 --rc genhtml_branch_coverage=1 00:29:20.423 --rc genhtml_function_coverage=1 00:29:20.423 --rc genhtml_legend=1 00:29:20.423 --rc geninfo_all_blocks=1 00:29:20.423 --rc geninfo_unexecuted_blocks=1 00:29:20.423 00:29:20.423 ' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:20.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.423 --rc genhtml_branch_coverage=1 00:29:20.423 --rc genhtml_function_coverage=1 00:29:20.423 --rc genhtml_legend=1 00:29:20.423 --rc geninfo_all_blocks=1 00:29:20.423 --rc geninfo_unexecuted_blocks=1 00:29:20.423 00:29:20.423 ' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:20.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.423 --rc genhtml_branch_coverage=1 00:29:20.423 --rc genhtml_function_coverage=1 00:29:20.423 --rc genhtml_legend=1 00:29:20.423 --rc geninfo_all_blocks=1 00:29:20.423 --rc geninfo_unexecuted_blocks=1 00:29:20.423 00:29:20.423 ' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:20.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.423 --rc genhtml_branch_coverage=1 00:29:20.423 --rc genhtml_function_coverage=1 00:29:20.423 --rc genhtml_legend=1 00:29:20.423 --rc geninfo_all_blocks=1 00:29:20.423 --rc geninfo_unexecuted_blocks=1 00:29:20.423 00:29:20.423 ' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.423 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:20.686 ************************************ 00:29:20.686 START TEST nvmf_abort 00:29:20.686 ************************************ 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:20.686 * Looking for test storage... 00:29:20.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:20.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.686 --rc genhtml_branch_coverage=1 00:29:20.686 --rc genhtml_function_coverage=1 00:29:20.686 --rc genhtml_legend=1 00:29:20.686 --rc geninfo_all_blocks=1 00:29:20.686 --rc geninfo_unexecuted_blocks=1 00:29:20.686 00:29:20.686 ' 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:20.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.686 --rc genhtml_branch_coverage=1 00:29:20.686 --rc genhtml_function_coverage=1 00:29:20.686 --rc genhtml_legend=1 00:29:20.686 --rc geninfo_all_blocks=1 00:29:20.686 --rc geninfo_unexecuted_blocks=1 00:29:20.686 00:29:20.686 ' 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:20.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.686 --rc genhtml_branch_coverage=1 00:29:20.686 --rc genhtml_function_coverage=1 00:29:20.686 --rc genhtml_legend=1 00:29:20.686 --rc geninfo_all_blocks=1 00:29:20.686 --rc geninfo_unexecuted_blocks=1 00:29:20.686 00:29:20.686 ' 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:20.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.686 --rc genhtml_branch_coverage=1 00:29:20.686 --rc genhtml_function_coverage=1 00:29:20.686 --rc genhtml_legend=1 00:29:20.686 --rc geninfo_all_blocks=1 00:29:20.686 --rc geninfo_unexecuted_blocks=1 00:29:20.686 00:29:20.686 ' 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.686 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.687 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.687 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.687 00:36:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:20.687 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:20.687 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.687 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.687 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.948 00:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:29.101 00:36:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:29.101 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
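The enumeration traced above selects NICs purely by PCI vendor/device ID; 0x8086:0x159b is the E810 variant found on this host, bound to the ice driver. A minimal stand-alone sketch of the same classification, assuming lspci from pciutils is installed and using only the IDs that appear in this trace, so the table is illustrative rather than exhaustive:

  # Sketch: group NIC PCI addresses by the vendor:device IDs used above.
  declare -a e810=() x722=() mlx=()
  while read -r addr _ ids _; do
    case "$ids" in
      8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 family (ice driver)
      8086:37d2)           x722+=("$addr") ;;   # Intel X722 (i40e driver)
      15b3:*)              mlx+=("$addr")  ;;   # Mellanox ConnectX family (broader than the exact list above)
    esac
  done < <(lspci -Dn)
  echo "E810 ports: ${e810[*]:-none}"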
00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:29.101 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:29.101 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:29.101 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:29.102 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:29.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:29:29.102 00:29:29.102 --- 10.0.0.2 ping statistics --- 00:29:29.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.102 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:29.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:29:29.102 00:29:29.102 --- 10.0.0.1 ping statistics --- 00:29:29.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.102 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=3443756 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3443756 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3443756 ']' 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:29.102 00:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.102 [2024-10-09 00:36:58.825564] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:29.102 [2024-10-09 00:36:58.826702] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:29:29.102 [2024-10-09 00:36:58.826759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.102 [2024-10-09 00:36:58.916833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:29.102 [2024-10-09 00:36:59.010062] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.102 [2024-10-09 00:36:59.010118] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.102 [2024-10-09 00:36:59.010127] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.102 [2024-10-09 00:36:59.010134] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.102 [2024-10-09 00:36:59.010142] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.102 [2024-10-09 00:36:59.011691] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.102 [2024-10-09 00:36:59.011852] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.102 [2024-10-09 00:36:59.011853] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:29.102 [2024-10-09 00:36:59.100272] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:29.102 [2024-10-09 00:36:59.101263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:29.102 [2024-10-09 00:36:59.101396] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
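The nvmf_tcp_init sequence traced a little earlier reduces to moving one E810 port into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) talk over a real link rather than loopback. A condensed sketch of that setup, with the interface names, addresses and port taken directly from this run:

  # Condensed from the traced nvmf_tcp_init steps (common.sh@250-291 above).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # verify both directions, as the trace does
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1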
00:29:29.102 [2024-10-09 00:36:59.101626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.102 [2024-10-09 00:36:59.684770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.102 Malloc0 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.102 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.364 Delay0 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
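With the target listening on /var/tmp/spdk.sock (the waitforlisten step above), abort.sh provisions its test device through a short chain of RPCs: a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev, and a subsystem that exposes it; the listener on 10.0.0.2:4420 is added a few lines further down. Roughly the same sequence expressed as direct rpc.py calls, under the assumption that rpc_cmd is a thin wrapper around that script, with every value taken from this run:

  # Hypothetical direct-rpc.py restatement of the traced rpc_cmd calls.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420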
00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.364 [2024-10-09 00:36:59.772735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.364 00:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:29.364 [2024-10-09 00:36:59.860949] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:31.981 Initializing NVMe Controllers 00:29:31.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:31.981 controller IO queue size 128 less than required 00:29:31.981 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:31.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:31.981 Initialization complete. Launching workers. 
00:29:31.981 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28320 00:29:31.981 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28381, failed to submit 66 00:29:31.981 success 28320, unsuccessful 61, failed 0 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.981 00:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.981 rmmod nvme_tcp 00:29:31.981 rmmod nvme_fabrics 00:29:31.981 rmmod nvme_keyring 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3443756 ']' 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3443756 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3443756 ']' 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3443756 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3443756 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3443756' 00:29:31.981 killing process with pid 3443756 
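The teardown that begins here (nvmftestfini) is completed over the next lines: the NVMe/TCP kernel modules are unloaded, the nvmf_tgt started earlier is killed, and the firewall rule and namespace from the setup are undone. Condensed into one place, and noting that _remove_spdk_ns runs with xtrace disabled, so the namespace deletion shown is an assumption about its effect:

  # Condensed teardown mirroring the traced nvmftestfini steps.
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics                          # nvme_keyring drops out with them in this run
  kill "$nvmfpid"                                      # nvmf_tgt pid 3443756 from the start of the test
  iptables-save | grep -v SPDK_NVMF | iptables-restore # remove only the rule tagged for this test
  ip netns delete cvl_0_0_ns_spdk                      # assumed effect of _remove_spdk_ns (not traced)
  ip -4 addr flush cvl_0_1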
00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3443756 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3443756 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.981 00:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.969 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.969 00:29:33.969 real 0m13.337s 00:29:33.969 user 0m10.867s 00:29:33.969 sys 0m7.002s 00:29:33.969 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:33.969 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:33.969 ************************************ 00:29:33.969 END TEST nvmf_abort 00:29:33.969 ************************************ 00:29:33.969 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:33.969 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:33.969 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:33.969 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:33.969 ************************************ 00:29:33.969 START TEST nvmf_ns_hotplug_stress 00:29:33.969 ************************************ 00:29:33.969 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:34.230 * Looking for test storage... 
00:29:34.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:34.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.230 --rc genhtml_branch_coverage=1 00:29:34.230 --rc genhtml_function_coverage=1 00:29:34.230 --rc genhtml_legend=1 00:29:34.230 --rc geninfo_all_blocks=1 00:29:34.230 --rc geninfo_unexecuted_blocks=1 00:29:34.230 00:29:34.230 ' 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:34.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.230 --rc genhtml_branch_coverage=1 00:29:34.230 --rc genhtml_function_coverage=1 00:29:34.230 --rc genhtml_legend=1 00:29:34.230 --rc geninfo_all_blocks=1 00:29:34.230 --rc geninfo_unexecuted_blocks=1 00:29:34.230 00:29:34.230 ' 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:34.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.230 --rc genhtml_branch_coverage=1 00:29:34.230 --rc genhtml_function_coverage=1 00:29:34.230 --rc genhtml_legend=1 00:29:34.230 --rc geninfo_all_blocks=1 00:29:34.230 --rc geninfo_unexecuted_blocks=1 00:29:34.230 00:29:34.230 ' 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:34.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.230 --rc genhtml_branch_coverage=1 00:29:34.230 --rc genhtml_function_coverage=1 
00:29:34.230 --rc genhtml_legend=1 00:29:34.230 --rc geninfo_all_blocks=1 00:29:34.230 --rc geninfo_unexecuted_blocks=1 00:29:34.230 00:29:34.230 ' 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.230 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
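The lcov probe repeated just above (and once before, for the abort test) answers a single question: is the installed lcov, 1.15 on this host, older than version 2, which decides the form of the coverage options exported into LCOV_OPTS and LCOV. The cmp_versions walk it traces is an element-wise numeric compare after splitting on '.', '-' and ':'; a minimal sketch of that idea, covering only the less-than path and without the digit validation the real helper performs:

  version_lt() {   # usage: version_lt 1.15 2  -> succeeds when the first version sorts before the second
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      ((${a[i]:-0} < ${b[i]:-0})) && return 0
      ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1       # equal versions are not 'less than'
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov detected"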
00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.231 00:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.390 00:37:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:42.390 00:37:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:42.390 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:42.390 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:42.390 
00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:42.390 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:42.390 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.390 00:37:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.390 00:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.732 ms 00:29:42.390 00:29:42.390 --- 10.0.0.2 ping statistics --- 00:29:42.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.390 rtt min/avg/max/mdev = 0.732/0.732/0.732/0.000 ms 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:29:42.390 00:29:42.390 --- 10.0.0.1 ping statistics --- 00:29:42.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.390 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3448729 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3448729 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3448729 ']' 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
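
The trace above covers the whole TCP test-bed bring-up: the two E810 ports found earlier (cvl_0_0 and cvl_0_1) are split between a private network namespace for the target and the default namespace for the initiator, TCP port 4420 is opened in the firewall, and a ping in each direction confirms the 10.0.0.0/24 link works before any NVMe traffic is sent. A minimal sketch of that setup, using only commands that appear in the trace (interface and namespace names are the ones from this run; root privileges assumed):

# Find the E810 ports the same way nvmf/common.sh does: match vendor 0x8086,
# device 0x1592/0x159b under sysfs and list the net interfaces bound to them.
for pci in /sys/bus/pci/devices/*; do
  case "$(<"$pci/vendor"):$(<"$pci/device")" in
    0x8086:0x1592|0x8086:0x159b)
      echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
      ls "$pci/net" 2>/dev/null ;;
  esac
done

# Split the two ports: the target side goes into its own namespace, the initiator stays put.
TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator IP
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                     # target -> initiator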
00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:42.390 00:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:42.390 [2024-10-09 00:37:12.295611] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:42.390 [2024-10-09 00:37:12.296739] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:29:42.390 [2024-10-09 00:37:12.296789] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.390 [2024-10-09 00:37:12.384126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:42.390 [2024-10-09 00:37:12.477024] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.390 [2024-10-09 00:37:12.477083] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.390 [2024-10-09 00:37:12.477092] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.390 [2024-10-09 00:37:12.477100] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.390 [2024-10-09 00:37:12.477107] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.390 [2024-10-09 00:37:12.478420] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.390 [2024-10-09 00:37:12.478580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.390 [2024-10-09 00:37:12.478582] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.390 [2024-10-09 00:37:12.567617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:42.390 [2024-10-09 00:37:12.567683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:42.390 [2024-10-09 00:37:12.568400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:42.390 [2024-10-09 00:37:12.568553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
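
nvmfappstart then launches nvmf_tgt inside that namespace with the flags visible above (-i 0 shared-memory ID, -e 0xFFFF tracepoint mask, --interrupt-mode, -m 0xE core mask) and waits for the RPC socket before configuring anything, which is what produces the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line. A simplified stand-in for that start-and-wait step (the polling loop is an approximation of the script's waitforlisten helper, not a copy of it):

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
  "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# Wait until the target is alive and its default RPC socket exists.
for _ in $(seq 1 100); do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  [ -S /var/tmp/spdk.sock ] && break
  sleep 0.1
done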
00:29:42.651 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.651 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:29:42.651 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:42.651 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.651 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:42.651 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.651 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:42.651 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:42.912 [2024-10-09 00:37:13.323447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.912 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:42.912 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.172 [2024-10-09 00:37:13.688134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.172 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:43.434 00:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:43.694 Malloc0 00:29:43.694 00:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:43.694 Delay0 00:29:43.694 00:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.955 00:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:44.216 NULL1 00:29:44.216 00:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
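
Everything the target needs is then configured over JSON-RPC: a TCP transport, one NVMe-oF subsystem with data and discovery listeners on 10.0.0.2:4420, a Malloc bdev wrapped in a Delay bdev, and a resizable Null bdev, with Delay0 and NULL1 attached as namespaces. Collected into one place, the rpc.py calls traced above are (arguments and order exactly as they appear in the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0        # backing bdev for the delay device
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512             # the bdev the stress loop will resize
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The Delay bdev sits between the subsystem and Malloc0 so that I/O stays in flight long enough for the hot-plug operations below to race against it.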
00:29:44.478 00:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3449144 00:29:44.478 00:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:44.478 00:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:44.478 00:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.435 Read completed with error (sct=0, sc=11) 00:29:45.435 00:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.699 00:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:45.699 00:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:45.960 true 00:29:45.960 00:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:45.960 00:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.903 00:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.903 00:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:46.903 00:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:47.173 true 00:29:47.173 00:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:47.173 00:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.437 00:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.437 00:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:47.437 00:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:47.697 true 00:29:47.697 00:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:47.697 00:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.079 00:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.079 00:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:49.079 00:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:49.079 true 00:29:49.079 00:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:49.079 00:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:50.022 00:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.282 00:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:50.282 00:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:50.282 true 00:29:50.282 00:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:50.282 00:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:50.542 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.802 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:50.802 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:50.802 true 00:29:50.802 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:50.802 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.063 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.323 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:51.323 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:51.323 true 00:29:51.323 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:51.323 00:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.583 00:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.845 00:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:51.845 00:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:51.845 true 00:29:51.845 00:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:52.105 00:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.048 00:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
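
The pattern repeating above and below is the hot-plug loop itself. spdk_nvme_perf (PID 3449144 in this run) drives 512-byte random reads at queue depth 128 against the subsystem for 30 seconds, and for as long as that process is alive the script removes namespace 1, re-attaches Delay0, bumps null_size and resizes NULL1, so the initiator keeps seeing namespaces disappear, reappear and change size mid-I/O; the "Read completed with error (sct=0, sc=11)" / "Message suppressed 999 times" lines are the expected fallout of reads racing with those removals. A compressed sketch of that loop, following the ns_hotplug_stress.sh line numbers visible in the trace:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$SPDK_ROOT/scripts/rpc.py

# Start the I/O load in the background (flags exactly as traced at ns_hotplug_stress.sh@40).
"$SPDK_ROOT/build/bin/spdk_nvme_perf" -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                      # sh@44: loop while perf is alive
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45: yank namespace 1 mid-I/O
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46: plug it back in
  null_size=$((null_size + 1))                                 # sh@49
  $rpc bdev_null_resize NULL1 "$null_size"                     # sh@50: grow NULL1 while exported
done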
00:29:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.309 00:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:53.309 00:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:53.570 true 00:29:53.570 00:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:53.570 00:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.511 00:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.511 00:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:54.511 00:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:54.770 true 00:29:54.770 00:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:54.770 00:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.770 00:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.030 00:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:55.030 00:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:55.289 true 00:29:55.289 00:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:55.289 00:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.488 00:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.488 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.488 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:56.488 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:56.747 true 00:29:56.747 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:56.747 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.008 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.008 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:57.008 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:57.268 true 00:29:57.268 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:57.268 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.529 00:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.529 00:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:57.529 00:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:57.789 true 00:29:57.789 00:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:57.789 00:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.050 00:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.050 00:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:58.051 00:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:58.311 true 00:29:58.311 00:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:58.311 00:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.571 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.832 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:58.832 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:58.832 true 00:29:58.832 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:58.832 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.093 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.354 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:59.354 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:59.354 true 00:29:59.354 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:59.354 00:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.614 00:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.875 00:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:59.875 00:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:59.875 true 00:29:59.875 00:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:29:59.875 00:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.818 00:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.079 00:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:01.079 00:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:01.079 true 00:30:01.079 00:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:01.079 00:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.340 00:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.600 00:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:01.600 00:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:01.600 true 00:30:01.600 00:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:01.600 00:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.991 00:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.991 00:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:02.991 00:37:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:03.252 true 00:30:03.252 00:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:03.252 00:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.192 00:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.192 00:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:04.192 00:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:04.458 true 00:30:04.458 00:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:04.458 00:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.721 00:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.721 00:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:04.721 00:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:04.982 true 00:30:04.982 00:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:04.982 00:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.242 00:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.502 00:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:05.502 00:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:05.502 true 00:30:05.502 00:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 
00:30:05.502 00:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.762 00:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.022 00:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:06.022 00:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:06.022 true 00:30:06.022 00:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:06.022 00:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.406 00:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.406 00:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:07.406 00:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:07.666 true 00:30:07.666 00:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:07.666 00:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.616 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.616 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:08.616 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:08.877 true 00:30:08.877 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:08.877 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.137 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.137 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:09.137 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:09.398 true 00:30:09.398 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:09.398 00:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.658 00:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.918 00:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:09.918 00:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:09.918 true 00:30:09.918 00:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:09.918 00:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.179 00:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.475 00:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:10.475 00:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:10.475 true 00:30:10.475 00:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:10.475 00:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.776 00:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.059 00:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:11.059 00:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:11.059 true 00:30:11.059 00:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:11.059 00:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.323 00:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.323 00:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:11.323 00:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:11.583 true 00:30:11.583 00:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:11.583 00:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.963 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.963 00:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.963 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.963 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.963 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.963 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.963 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.963 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.963 00:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:12.963 00:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:13.222 true 00:30:13.222 00:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:13.222 00:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.159 00:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.159 00:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:14.159 00:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:14.418 true 00:30:14.418 00:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:14.418 00:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.418 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.677 Initializing NVMe Controllers 00:30:14.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.677 Controller IO queue size 128, less than required. 00:30:14.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:14.677 Controller IO queue size 128, less than required. 00:30:14.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:14.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:14.677 Initialization complete. Launching workers. 
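The @44-@50 trace entries above repeat one iteration of the hotplug stress loop: check with kill -0 that the I/O generator (pid 3449144) is still alive, hot-remove namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attach the Delay0 bdev, bump null_size, and resize the NULL1 bdev under load. A minimal sketch of that loop, reconstructed from the trace; the while condition, the perf_pid variable, and the rpc_py shorthand are assumptions, only the RPC calls are taken from the log:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1024
    while kill -0 "$perf_pid" 2>/dev/null; do                           # keep stressing while the I/O load runs (perf_pid is assumed)
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back, backed by the Delay0 bdev
        null_size=$((null_size + 1))                                    # 1025, 1026, ... as seen in the trace
        $rpc_py bdev_null_resize NULL1 $null_size                       # grow the null bdev while I/O is in flight
    done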
00:30:14.677 ======================================================== 00:30:14.677 Latency(us) 00:30:14.677 Device Information : IOPS MiB/s Average min max 00:30:14.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2047.10 1.00 33457.93 1456.35 1013747.58 00:30:14.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15967.05 7.80 8016.18 1143.04 300876.79 00:30:14.677 ======================================================== 00:30:14.677 Total : 18014.16 8.80 10907.34 1143.04 1013747.58 00:30:14.677 00:30:14.677 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:14.677 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:14.937 true 00:30:14.937 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3449144 00:30:14.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3449144) - No such process 00:30:14.937 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3449144 00:30:14.937 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.198 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:15.198 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:15.198 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:15.198 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:15.198 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.198 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:15.458 null0 00:30:15.458 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.458 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.458 00:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:15.458 null1 00:30:15.731 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.731 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.731 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:15.731 null2 00:30:15.731 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.731 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.731 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:15.992 null3 00:30:15.992 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.992 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.992 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:15.992 null4 00:30:15.992 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:15.992 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:15.992 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:16.252 null5 00:30:16.252 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.252 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.252 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:16.514 null6 00:30:16.514 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.514 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.514 00:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:16.514 null7 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # 
pids+=($!) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
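The interleaved @14-@18 entries above come from the add_remove helper that each background worker runs: it pins one namespace ID and one null bdev, then adds and removes that namespace ten times. A sketch of the helper as it can be read back out of the trace; the function wrapper and the rpc_py shorthand (the scripts/rpc.py path shown in the log) are assumptions, while the loop bound of 10 and both RPC calls appear verbatim above:

    add_remove() {                                                      # one worker: fixed nsid plus its backing bdev
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                  # matches the "(( i < 10 ))" trace entries
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }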
00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
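Before the workers start, the @58-@60 entries show eight null bdevs (null0 through null7) being created, one per worker, each with the same 100 and 4096 size/block-size arguments. A sketch of that setup loop; the for construct is an assumption and rpc_py again abbreviates the scripts/rpc.py path, while the bdev_null_create arguments are copied from the trace:

    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096                      # name, size, block size exactly as logged
    done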
00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
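The @62-@66 entries interleave the launcher with the workers themselves: each add_remove call (add_remove 1 null0, add_remove 2 null1, and so on) is started as a background job, its PID is appended to pids, and the test then waits on all eight worker PIDs, as the wait line with eight PIDs below shows. A sketch of that launcher, with the backgrounding and wait structure assumed from the trace:

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &                                # nsid 1..8 against null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                                                   # block until every worker finishes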
00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3455281 3455284 3455285 3455287 3455290 3455292 3455295 3455298 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.514 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:16.776 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.776 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:16.776 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:16.776 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:16.776 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:16.776 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:16.776 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:16.776 00:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.044 00:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.044 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.306 00:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.306 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.307 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.307 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.307 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.307 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.568 00:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.568 00:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.568 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.829 00:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.829 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:18.089 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.350 00:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:18.610 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.610 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.610 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:18.610 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.610 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.610 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.610 00:37:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:18.611 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.871 
00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.871 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.132 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.132 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.132 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.133 
00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.133 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:19.393 00:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:19.393 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.393 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.393 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.393 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.394 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.654 00:37:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.654 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:19.915 
00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.915 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:30:20.176 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:20.176 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:20.176 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.176 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.176 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.176 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.176 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:20.176 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.176 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.177 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:20.177 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.177 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:20.177 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.177 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.177 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.177 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.177 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.177 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.439 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:20.439 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:20.439 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.439 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.439 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.439 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.439 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.439 00:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.439 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.439 rmmod nvme_tcp 00:30:20.439 rmmod nvme_fabrics 00:30:20.708 rmmod nvme_keyring 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3448729 ']' 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3448729 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3448729 ']' 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3448729 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3448729 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3448729' 00:30:20.708 killing process with pid 3448729 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3448729 00:30:20.708 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3448729 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.709 00:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.255 00:30:23.255 real 0m48.881s 00:30:23.255 user 2m58.973s 00:30:23.255 sys 0m21.832s 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:23.255 ************************************ 00:30:23.255 END TEST nvmf_ns_hotplug_stress 00:30:23.255 ************************************ 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:23.255 00:37:53 
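Note on the trace above: the interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls tagged ns_hotplug_stress.sh@16-@18 come from a small add/remove loop that the test runs against nqn.2016-06.io.spdk:cnode1 while I/O is in flight. The following is a minimal sketch reconstructed from those trace markers, not the script verbatim; the helper name add_remove and the backgrounding of one worker per namespace are assumptions (the parallel workers are why the calls for nsid 1..8 interleave in the log).

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {                                   # assumed helper; @16-@18 in the trace
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; ++i)); do                                  # @16
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
      done
  }

  # One worker per namespace/null bdev pair (nsid 1 -> null0, ... nsid 8 -> null7),
  # run concurrently and then reaped.
  for nsid in {1..8}; do
      add_remove "$nsid" "null$((nsid - 1))" &
  done
  wait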
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:23.255 ************************************ 00:30:23.255 START TEST nvmf_delete_subsystem 00:30:23.255 ************************************ 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:23.255 * Looking for test storage... 00:30:23.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:23.255 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:23.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.256 --rc genhtml_branch_coverage=1 00:30:23.256 --rc genhtml_function_coverage=1 00:30:23.256 --rc genhtml_legend=1 00:30:23.256 --rc geninfo_all_blocks=1 00:30:23.256 --rc geninfo_unexecuted_blocks=1 00:30:23.256 00:30:23.256 ' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:23.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.256 --rc genhtml_branch_coverage=1 00:30:23.256 --rc genhtml_function_coverage=1 00:30:23.256 --rc genhtml_legend=1 00:30:23.256 --rc geninfo_all_blocks=1 00:30:23.256 --rc geninfo_unexecuted_blocks=1 00:30:23.256 00:30:23.256 ' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:23.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.256 --rc genhtml_branch_coverage=1 00:30:23.256 --rc genhtml_function_coverage=1 00:30:23.256 --rc genhtml_legend=1 00:30:23.256 --rc geninfo_all_blocks=1 00:30:23.256 --rc geninfo_unexecuted_blocks=1 00:30:23.256 00:30:23.256 ' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:23.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.256 --rc genhtml_branch_coverage=1 00:30:23.256 --rc genhtml_function_coverage=1 00:30:23.256 --rc 
genhtml_legend=1 00:30:23.256 --rc geninfo_all_blocks=1 00:30:23.256 --rc geninfo_unexecuted_blocks=1 00:30:23.256 00:30:23.256 ' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.256 00:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.256 00:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.438 00:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.438 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.439 00:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:31.439 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:31.439 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.439 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.440 00:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:31.440 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:31.440 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.440 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.441 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.441 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.441 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.441 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.441 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.441 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.441 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.441 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.441 00:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.441 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.441 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.441 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.441 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.441 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.441 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:30:31.441 00:30:31.441 --- 10.0.0.2 ping statistics --- 00:30:31.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.441 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:30:31.441 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:30:31.441 00:30:31.441 --- 10.0.0.1 ping statistics --- 00:30:31.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.441 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:30:31.441 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.441 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:30:31.442 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:31.442 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.442 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3460430 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3460430 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3460430 ']' 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:31.443 00:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.443 [2024-10-09 00:38:01.240432] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:31.443 [2024-10-09 00:38:01.241595] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:30:31.443 [2024-10-09 00:38:01.241644] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.443 [2024-10-09 00:38:01.331352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:31.443 [2024-10-09 00:38:01.426584] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.443 [2024-10-09 00:38:01.426649] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.443 [2024-10-09 00:38:01.426660] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.443 [2024-10-09 00:38:01.426668] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.443 [2024-10-09 00:38:01.426675] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.443 [2024-10-09 00:38:01.427784] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.443 [2024-10-09 00:38:01.427838] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.443 [2024-10-09 00:38:01.504388] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:31.443 [2024-10-09 00:38:01.504902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:31.443 [2024-10-09 00:38:01.505248] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
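For reference, the interface and namespace plumbing traced above reduces to roughly the following shell sequence. This is a condensed sketch assembled from the nvmf/common.sh commands visible in this log: the cvl_0_0/cvl_0_1 device names, the 10.0.0.1/10.0.0.2 addresses, port 4420 and the nvmf_tgt flags are copied from the trace; paths are shortened, and the address flushes and iptables comment tagging are omitted.

# move one port of the e810 pair into a private namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic on the initiator-side port and sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace on cores 0-1, in interrupt mode (-i 0 = shm id 0)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &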
00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.710 [2024-10-09 00:38:02.136940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.710 [2024-10-09 00:38:02.181563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.710 NULL1 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.710 00:38:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.710 Delay0 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3460778 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:31.710 00:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:31.710 [2024-10-09 00:38:02.292524] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
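The bench that this perf run exercises is built entirely over the RPC socket. Below is a compact sketch of the same call sequence, assuming rpc_cmd in the trace is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; every argument is copied from the trace, and only the direct rpc.py invocation form and the $rpc shorthand are assumptions.

rpc="./scripts/rpc.py"                            # assumed stand-in for the harness's rpc_cmd helper
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512              # null backing bdev: 1000 MB, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1,000,000 us of added latency keeps I/O queued
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# drive 128 queued random 512 B I/Os (70% reads) at the listener so the delete has work in flight
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &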
00:30:33.626 00:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.626 00:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.626 00:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 starting I/O failed: -6 00:30:34.198 Write completed with error (sct=0, sc=8) 00:30:34.198 Read completed with error (sct=0, sc=8) 00:30:34.198 [2024-10-09 00:38:04.543501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44390 is same with the 
state(6) to be set 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with 
error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 starting I/O failed: -6 00:30:34.199 starting I/O failed: -6 00:30:34.199 starting I/O failed: -6 00:30:34.199 starting I/O failed: -6 00:30:34.199 starting I/O failed: -6 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O 
failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Write completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 starting I/O failed: -6 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 Read completed with error (sct=0, sc=8) 00:30:34.199 [2024-10-09 00:38:04.546938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8d4400cfe0 is same with the state(6) to be set 00:30:35.142 [2024-10-09 00:38:05.517085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45a70 is same with the state(6) to be set 00:30:35.142 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, 
sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 [2024-10-09 00:38:05.546973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44570 is same with the state(6) to be set 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 [2024-10-09 00:38:05.547895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44930 is same with the state(6) to be set 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 
Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 [2024-10-09 00:38:05.548432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8d4400d640 is same with the state(6) to be set 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Write completed with error (sct=0, sc=8) 00:30:35.143 Read completed with error (sct=0, sc=8) 00:30:35.143 [2024-10-09 00:38:05.548568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7f8d44000c00 is same with the state(6) to be set 00:30:35.143 Initializing NVMe Controllers 00:30:35.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.143 Controller IO queue size 128, less than required. 00:30:35.143 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:35.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:35.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:35.143 Initialization complete. Launching workers. 00:30:35.143 ======================================================== 00:30:35.143 Latency(us) 00:30:35.143 Device Information : IOPS MiB/s Average min max 00:30:35.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.44 0.09 884963.06 422.67 1010029.79 00:30:35.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 181.40 0.09 917172.64 406.62 1013064.30 00:30:35.143 ======================================================== 00:30:35.143 Total : 355.84 0.17 901382.75 406.62 1013064.30 00:30:35.143 00:30:35.143 [2024-10-09 00:38:05.549253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d45a70 (9): Bad file descriptor 00:30:35.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:35.143 00:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.143 00:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:35.143 00:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3460778 00:30:35.143 00:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3460778 00:30:35.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3460778) - No such process 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3460778 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3460778 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
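The shape of the result above is the point of the test: nvmf_delete_subsystem is issued while the delay bdev still holds the queued perf I/O, so the outstanding commands are completed with errors, perf reports them and exits, and the harness merely polls until the perf process is gone. A rough reconstruction of that wait loop, pieced together from the delete_subsystem.sh line numbers in the trace (the timeout branch is never reached in this run, so its handling is assumed):

# sketch of target/delete_subsystem.sh lines 34-38 as traced above
delay=0
while kill -0 "$perf_pid"; do                 # loop ends once perf has exited
    sleep 0.5
    if (( delay++ > 30 )); then               # ~15 s budget (30 polls of 0.5 s)
        echo "perf kept running after subsystem deletion" >&2   # assumed failure handling
        exit 1
    fi
done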
common/autotest_common.sh@653 -- # wait 3460778 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.726 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:35.727 [2024-10-09 00:38:06.081456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3461458 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3461458 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:35.727 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:35.727 [2024-10-09 00:38:06.153822] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on 
TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:35.987 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:35.987 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3461458 00:30:35.987 00:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:36.559 00:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:36.559 00:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3461458 00:30:36.559 00:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:37.130 00:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:37.130 00:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3461458 00:30:37.130 00:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:37.722 00:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:37.722 00:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3461458 00:30:37.722 00:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:37.990 00:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:38.250 00:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3461458 00:30:38.250 00:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:38.511 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:38.511 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3461458 00:30:38.511 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:38.771 Initializing NVMe Controllers 00:30:38.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:38.771 Controller IO queue size 128, less than required. 00:30:38.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:38.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:38.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:38.771 Initialization complete. Launching workers. 
00:30:38.771 ======================================================== 00:30:38.771 Latency(us) 00:30:38.771 Device Information : IOPS MiB/s Average min max 00:30:38.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002304.26 1000202.59 1006647.83 00:30:38.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005257.06 1000206.80 1043107.50 00:30:38.771 ======================================================== 00:30:38.771 Total : 256.00 0.12 1003780.66 1000202.59 1043107.50 00:30:38.771 00:30:39.044 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:39.044 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3461458 00:30:39.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3461458) - No such process 00:30:39.045 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3461458 00:30:39.045 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:39.045 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:39.045 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:39.045 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:39.045 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:39.045 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:39.045 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:39.045 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:39.045 rmmod nvme_tcp 00:30:39.045 rmmod nvme_fabrics 00:30:39.045 rmmod nvme_keyring 00:30:39.310 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:39.310 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:39.310 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3460430 ']' 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3460430 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3460430 ']' 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3460430 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3460430 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3460430' 00:30:39.311 killing process with pid 3460430 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3460430 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3460430 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.311 00:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.857 00:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:41.857 00:30:41.857 real 0m18.494s 00:30:41.857 user 0m26.958s 00:30:41.857 sys 0m7.512s 00:30:41.857 00:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:41.857 00:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.857 ************************************ 00:30:41.857 END TEST nvmf_delete_subsystem 00:30:41.857 ************************************ 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:41.857 ************************************ 00:30:41.857 START TEST nvmf_host_management 00:30:41.857 ************************************ 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:41.857 * Looking for test storage... 00:30:41.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:41.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.857 --rc genhtml_branch_coverage=1 00:30:41.857 --rc genhtml_function_coverage=1 00:30:41.857 --rc genhtml_legend=1 00:30:41.857 --rc geninfo_all_blocks=1 00:30:41.857 --rc geninfo_unexecuted_blocks=1 00:30:41.857 00:30:41.857 ' 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:41.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.857 --rc genhtml_branch_coverage=1 00:30:41.857 --rc genhtml_function_coverage=1 00:30:41.857 --rc genhtml_legend=1 00:30:41.857 --rc geninfo_all_blocks=1 00:30:41.857 --rc geninfo_unexecuted_blocks=1 00:30:41.857 00:30:41.857 ' 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:41.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.857 --rc genhtml_branch_coverage=1 00:30:41.857 --rc genhtml_function_coverage=1 00:30:41.857 --rc genhtml_legend=1 00:30:41.857 --rc geninfo_all_blocks=1 00:30:41.857 --rc geninfo_unexecuted_blocks=1 00:30:41.857 00:30:41.857 ' 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:41.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.857 --rc genhtml_branch_coverage=1 00:30:41.857 --rc genhtml_function_coverage=1 00:30:41.857 --rc genhtml_legend=1 
00:30:41.857 --rc geninfo_all_blocks=1 00:30:41.857 --rc geninfo_unexecuted_blocks=1 00:30:41.857 00:30:41.857 ' 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.857 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.858 00:38:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:41.858 00:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.999 00:38:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:49.999 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:49.999 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:50.000 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:50.000 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:50.000 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:30:50.000 00:30:50.000 --- 10.0.0.2 ping statistics --- 00:30:50.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.000 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:30:50.000 00:30:50.000 --- 10.0.0.1 ping statistics --- 00:30:50.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.000 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3466281 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3466281 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3466281 ']' 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:50.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:50.000 00:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.000 [2024-10-09 00:38:19.863397] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:50.000 [2024-10-09 00:38:19.864515] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:30:50.000 [2024-10-09 00:38:19.864565] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.000 [2024-10-09 00:38:19.953064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.000 [2024-10-09 00:38:20.054469] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.000 [2024-10-09 00:38:20.054534] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.001 [2024-10-09 00:38:20.054546] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.001 [2024-10-09 00:38:20.054557] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.001 [2024-10-09 00:38:20.054566] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.001 [2024-10-09 00:38:20.056687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.001 [2024-10-09 00:38:20.056851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.001 [2024-10-09 00:38:20.057135] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:30:50.001 [2024-10-09 00:38:20.057139] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.001 [2024-10-09 00:38:20.155583] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:50.001 [2024-10-09 00:38:20.156589] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:50.001 [2024-10-09 00:38:20.156944] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:50.001 [2024-10-09 00:38:20.157356] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:50.001 [2024-10-09 00:38:20.157387] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
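For reference, the nvmftestinit sequence logged above condenses to the following shell steps. The interface names (cvl_0_0, cvl_0_1), the addresses, and the nvmf_tgt arguments are the ones detected and used in this particular run, and the listing is a sketch assembled from the logged commands rather than the helper functions themselves:

  # Put the target-side E810 port into its own network namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listener port (the test also tags the rule with an SPDK_NVMF comment)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify connectivity in both directions, then load the host-side transport module
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # Start the target inside the namespace on cores 1-4 (-m 0x1E) in interrupt mode
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1E

The thread notices above confirm that the app thread and all four nvmf_tgt poll groups came up in intr mode before the subsystem configuration that follows is applied.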
00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.262 [2024-10-09 00:38:20.730267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.262 Malloc0 00:30:50.262 [2024-10-09 00:38:20.822554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3466502 00:30:50.262 00:38:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3466502 /var/tmp/bdevperf.sock 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3466502 ']' 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:50.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:50.262 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:50.262 { 00:30:50.262 "params": { 00:30:50.262 "name": "Nvme$subsystem", 00:30:50.262 "trtype": "$TEST_TRANSPORT", 00:30:50.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.262 "adrfam": "ipv4", 00:30:50.262 "trsvcid": "$NVMF_PORT", 00:30:50.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.262 "hdgst": ${hdgst:-false}, 00:30:50.262 "ddgst": ${ddgst:-false} 00:30:50.262 }, 00:30:50.262 "method": "bdev_nvme_attach_controller" 00:30:50.262 } 00:30:50.262 EOF 00:30:50.262 )") 00:30:50.263 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:50.263 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
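The bdevperf invocation above takes its configuration from gen_nvmf_target_json via --json /dev/fd/63; the heredoc template it expands is visible in the log entry just before this point, and the rendered JSON is printed immediately below. The following is a simplified, standalone sketch of that substitution pattern: the real helper lives in test/nvmf/common.sh, the fixed values are filled in from this run's rendered output, and the final printf/jq step is only an approximation of the logged plumbing.

# Values as they appear in the rendered JSON below; in the test they come from nvmf/common.sh
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 0; do   # host_management.sh calls gen_nvmf_target_json 0, hence Nvme0/cnode0/host0
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Pretty-print/validate each rendered entry; bdevperf consumes the resulting config on /dev/fd/63
printf '%s\n' "${config[@]}" | jq .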
00:30:50.263 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:50.524 00:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:50.524 "params": { 00:30:50.524 "name": "Nvme0", 00:30:50.524 "trtype": "tcp", 00:30:50.524 "traddr": "10.0.0.2", 00:30:50.524 "adrfam": "ipv4", 00:30:50.524 "trsvcid": "4420", 00:30:50.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:50.524 "hdgst": false, 00:30:50.524 "ddgst": false 00:30:50.524 }, 00:30:50.524 "method": "bdev_nvme_attach_controller" 00:30:50.524 }' 00:30:50.524 [2024-10-09 00:38:20.933119] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:30:50.524 [2024-10-09 00:38:20.933192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466502 ] 00:30:50.524 [2024-10-09 00:38:21.015368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.524 [2024-10-09 00:38:21.112090] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.785 Running I/O for 10 seconds... 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.373 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:51.373 [2024-10-09 00:38:21.822221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 
[2024-10-09 00:38:21.822357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.373 [2024-10-09 00:38:21.822385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.374 [2024-10-09 00:38:21.822393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x673360 is same with the state(6) to be set 00:30:51.374 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.374 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:51.374 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.374 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:51.374 [2024-10-09 00:38:21.832126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.374 [2024-10-09 00:38:21.832180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.374 [2024-10-09 00:38:21.832200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.374 [2024-10-09 00:38:21.832217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.374 [2024-10-09 00:38:21.832234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b90c0 is same with the state(6) to be set 00:30:51.374 [2024-10-09 00:38:21.832770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.832983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.832993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.374 [2024-10-09 00:38:21.833407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.374 [2024-10-09 00:38:21.833417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.833942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.375 [2024-10-09 00:38:21.833949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.375 [2024-10-09 00:38:21.834040] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17d1e30 was disconnected and freed. reset controller. 00:30:51.375 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.375 00:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:51.375 [2024-10-09 00:38:21.835261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:51.375 task offset: 99072 on job bdev=Nvme0n1 fails 00:30:51.375 00:30:51.375 Latency(us) 00:30:51.375 [2024-10-08T22:38:22.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.375 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:51.375 Job: Nvme0n1 ended in about 0.52 seconds with error 00:30:51.375 Verification LBA range: start 0x0 length 0x400 00:30:51.375 Nvme0n1 : 0.52 1489.58 93.10 124.13 0.00 38601.25 2389.33 36481.71 00:30:51.375 [2024-10-08T22:38:22.010Z] =================================================================================================================== 00:30:51.375 [2024-10-08T22:38:22.010Z] Total : 1489.58 93.10 124.13 0.00 38601.25 2389.33 36481.71 00:30:51.375 [2024-10-09 00:38:21.837462] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:51.375 [2024-10-09 00:38:21.837499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b90c0 (9): Bad file descriptor 00:30:51.375 [2024-10-09 00:38:21.971932] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
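The relaunch recorded in the lines below drives the same bdevperf workload (-q 64 -o 65536 -w verify -t 1) at the target again, this time feeding the attach configuration over /dev/fd/62. Only the bdev_nvme_attach_controller params fragment is printed verbatim in the log, so the outer subsystems/config wrapper in this sketch is an assumption about what gen_nvmf_target_json emits; the parameter values themselves are the ones the log prints.

# Sketch only: re-driving bdevperf against the TCP target with an inline JSON config.
# Assumption: the full config wraps the printed params in a standard bdev-subsystem block.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
$BDEVPERF --json "$CFG" -q 64 -o 65536 -w verify -t 1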
00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3466502 00:30:52.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3466502) - No such process 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:52.317 { 00:30:52.317 "params": { 00:30:52.317 "name": "Nvme$subsystem", 00:30:52.317 "trtype": "$TEST_TRANSPORT", 00:30:52.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.317 "adrfam": "ipv4", 00:30:52.317 "trsvcid": "$NVMF_PORT", 00:30:52.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.317 "hdgst": ${hdgst:-false}, 00:30:52.317 "ddgst": ${ddgst:-false} 00:30:52.317 }, 00:30:52.317 "method": "bdev_nvme_attach_controller" 00:30:52.317 } 00:30:52.317 EOF 00:30:52.317 )") 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:52.317 00:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:52.317 "params": { 00:30:52.317 "name": "Nvme0", 00:30:52.317 "trtype": "tcp", 00:30:52.317 "traddr": "10.0.0.2", 00:30:52.317 "adrfam": "ipv4", 00:30:52.317 "trsvcid": "4420", 00:30:52.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.317 "hdgst": false, 00:30:52.317 "ddgst": false 00:30:52.317 }, 00:30:52.317 "method": "bdev_nvme_attach_controller" 00:30:52.317 }' 00:30:52.317 [2024-10-09 00:38:22.892959] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:30:52.317 [2024-10-09 00:38:22.893017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466855 ] 00:30:52.578 [2024-10-09 00:38:22.972059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.578 [2024-10-09 00:38:23.035414] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.839 Running I/O for 1 seconds... 00:30:53.793 1640.00 IOPS, 102.50 MiB/s 00:30:53.793 Latency(us) 00:30:53.793 [2024-10-08T22:38:24.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.793 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.793 Verification LBA range: start 0x0 length 0x400 00:30:53.793 Nvme0n1 : 1.01 1686.92 105.43 0.00 0.00 37111.36 1993.39 36700.16 00:30:53.793 [2024-10-08T22:38:24.428Z] =================================================================================================================== 00:30:53.793 [2024-10-08T22:38:24.428Z] Total : 1686.92 105.43 0.00 0.00 37111.36 1993.39 36700.16 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:54.053 rmmod nvme_tcp 00:30:54.053 rmmod nvme_fabrics 00:30:54.053 rmmod nvme_keyring 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3466281 ']' 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3466281 00:30:54.053 00:38:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3466281 ']' 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3466281 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3466281 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3466281' 00:30:54.053 killing process with pid 3466281 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3466281 00:30:54.053 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3466281 00:30:54.314 [2024-10-09 00:38:24.732753] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:54.314 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.315 00:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.227 00:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.227 00:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:56.227 00:30:56.227 real 0m14.788s 00:30:56.227 user 
0m20.055s 00:30:56.227 sys 0m7.446s 00:30:56.227 00:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:56.227 00:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:56.227 ************************************ 00:30:56.227 END TEST nvmf_host_management 00:30:56.227 ************************************ 00:30:56.488 00:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:56.488 00:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:56.488 00:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:56.488 00:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:56.488 ************************************ 00:30:56.488 START TEST nvmf_lvol 00:30:56.488 ************************************ 00:30:56.488 00:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:56.488 * Looking for test storage... 00:30:56.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.488 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:56.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.750 --rc genhtml_branch_coverage=1 00:30:56.750 --rc genhtml_function_coverage=1 00:30:56.750 --rc genhtml_legend=1 00:30:56.750 --rc geninfo_all_blocks=1 00:30:56.750 --rc geninfo_unexecuted_blocks=1 00:30:56.750 00:30:56.750 ' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:56.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.750 --rc genhtml_branch_coverage=1 00:30:56.750 --rc genhtml_function_coverage=1 00:30:56.750 --rc genhtml_legend=1 00:30:56.750 --rc geninfo_all_blocks=1 00:30:56.750 --rc geninfo_unexecuted_blocks=1 00:30:56.750 00:30:56.750 ' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:56.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.750 --rc genhtml_branch_coverage=1 00:30:56.750 --rc genhtml_function_coverage=1 00:30:56.750 --rc genhtml_legend=1 00:30:56.750 --rc geninfo_all_blocks=1 00:30:56.750 --rc geninfo_unexecuted_blocks=1 00:30:56.750 00:30:56.750 ' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:56.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.750 --rc genhtml_branch_coverage=1 00:30:56.750 --rc genhtml_function_coverage=1 
00:30:56.750 --rc genhtml_legend=1 00:30:56.750 --rc geninfo_all_blocks=1 00:30:56.750 --rc geninfo_unexecuted_blocks=1 00:30:56.750 00:30:56.750 ' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.750 00:38:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.750 00:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.887 00:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:04.887 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:04.887 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:04.887 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:04.887 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:04.887 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.888 
00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:31:04.888 00:31:04.888 --- 10.0.0.2 ping statistics --- 00:31:04.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.888 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:31:04.888 00:31:04.888 --- 10.0.0.1 ping statistics --- 00:31:04.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.888 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3471405 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3471405 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3471405 ']' 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.888 00:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.888 [2024-10-09 00:38:34.456747] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
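Stripped of the xtrace prefixes, the namespace plumbing and target launch recorded above come down to a short sequence; a condensed sketch, assuming root privileges and the cvl_0_0/cvl_0_1 port names this host reports:

# Put one e810 port (target side, 10.0.0.2) into its own namespace; keep the peer (initiator, 10.0.0.1) in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface and sanity-check reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
# Launch the target inside the namespace in interrupt mode on cores 0-2, as recorded.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &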
00:31:04.888 [2024-10-09 00:38:34.457917] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:31:04.888 [2024-10-09 00:38:34.457969] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.888 [2024-10-09 00:38:34.551857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:04.888 [2024-10-09 00:38:34.647104] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.888 [2024-10-09 00:38:34.647168] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.888 [2024-10-09 00:38:34.647176] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.888 [2024-10-09 00:38:34.647184] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.888 [2024-10-09 00:38:34.647190] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.888 [2024-10-09 00:38:34.648539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.888 [2024-10-09 00:38:34.648700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.888 [2024-10-09 00:38:34.648701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.888 [2024-10-09 00:38:34.733217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:04.888 [2024-10-09 00:38:34.734179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:04.888 [2024-10-09 00:38:34.734527] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:04.888 [2024-10-09 00:38:34.734690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:04.888 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:04.888 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:31:04.888 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:04.888 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:04.888 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.888 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.888 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:04.888 [2024-10-09 00:38:35.501570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.152 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:05.152 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:05.153 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:05.431 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:05.431 00:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:05.697 00:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:05.697 00:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f76c7f18-f95e-4d21-8960-4fe013e9e11c 00:31:05.697 00:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f76c7f18-f95e-4d21-8960-4fe013e9e11c lvol 20 00:31:05.957 00:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=be7664d7-e722-4f07-a593-50dc2cf2f5cc 00:31:05.957 00:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:06.217 00:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 be7664d7-e722-4f07-a593-50dc2cf2f5cc 00:31:06.217 00:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.478 [2024-10-09 00:38:36.981576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:06.478 00:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:06.748 00:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3471897 00:31:06.748 00:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:06.748 00:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:07.692 00:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot be7664d7-e722-4f07-a593-50dc2cf2f5cc MY_SNAPSHOT 00:31:07.952 00:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0631780c-f7ef-43a1-afd7-b6f60ed67ac8 00:31:07.953 00:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize be7664d7-e722-4f07-a593-50dc2cf2f5cc 30 00:31:08.213 00:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0631780c-f7ef-43a1-afd7-b6f60ed67ac8 MY_CLONE 00:31:08.473 00:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0057f7c2-cbfb-491f-8983-1d85994dbe8d 00:31:08.473 00:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0057f7c2-cbfb-491f-8983-1d85994dbe8d 00:31:08.734 00:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3471897 00:31:18.848 Initializing NVMe Controllers 00:31:18.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:18.848 Controller IO queue size 128, less than required. 00:31:18.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:18.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:18.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:18.848 Initialization complete. Launching workers. 
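For readers following the trace, the nvmf_lvol setup above reduces to the RPC sequence below. This is a minimal sketch rather than the test script itself: it assumes an SPDK nvmf target is already running on its default RPC socket, and the shell variables stand in for the UUIDs the log prints for the lvstore, lvol, snapshot and clone.

  # TCP transport, then two 64 MiB malloc bdevs (512 B blocks) striped into raid0
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512            # -> Malloc0
  scripts/rpc.py bdev_malloc_create 64 512            # -> Malloc1
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  # lvstore on the raid, a 20 MiB lvol, exported over NVMe/TCP on 10.0.0.2:4420
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)
  lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 20)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # while spdk_nvme_perf drives random writes, the lvol is snapshotted, resized,
  # cloned, and the clone inflated (the operations traced at 00:38:38 above)
  snapshot=$(scripts/rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  scripts/rpc.py bdev_lvol_resize "$lvol" 30
  clone=$(scripts/rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)
  scripts/rpc.py bdev_lvol_inflate "$clone"

Waiting on the perf PID afterwards is what produces the latency summary that follows.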
00:31:18.848 ======================================================== 00:31:18.848 Latency(us) 00:31:18.848 Device Information : IOPS MiB/s Average min max 00:31:18.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15388.80 60.11 8318.05 1233.98 57421.30 00:31:18.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14951.10 58.40 8563.44 1462.82 58615.39 00:31:18.848 ======================================================== 00:31:18.848 Total : 30339.89 118.52 8438.97 1233.98 58615.39 00:31:18.848 00:31:18.848 00:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:18.848 00:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete be7664d7-e722-4f07-a593-50dc2cf2f5cc 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f76c7f18-f95e-4d21-8960-4fe013e9e11c 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.848 rmmod nvme_tcp 00:31:18.848 rmmod nvme_fabrics 00:31:18.848 rmmod nvme_keyring 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3471405 ']' 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3471405 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3471405 ']' 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3471405 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3471405 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3471405' 00:31:18.848 killing process with pid 3471405 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3471405 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3471405 00:31:18.848 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.849 00:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.231 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.231 00:31:20.231 real 0m23.633s 00:31:20.231 user 0m55.988s 00:31:20.231 sys 0m10.786s 00:31:20.231 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:20.231 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:20.231 ************************************ 00:31:20.231 END TEST nvmf_lvol 00:31:20.231 ************************************ 00:31:20.231 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:20.231 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:20.231 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:20.231 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:20.231 ************************************ 00:31:20.231 START TEST nvmf_lvs_grow 00:31:20.231 
************************************ 00:31:20.231 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:20.231 * Looking for test storage... 00:31:20.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:20.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.232 --rc genhtml_branch_coverage=1 00:31:20.232 --rc genhtml_function_coverage=1 00:31:20.232 --rc genhtml_legend=1 00:31:20.232 --rc geninfo_all_blocks=1 00:31:20.232 --rc geninfo_unexecuted_blocks=1 00:31:20.232 00:31:20.232 ' 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:20.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.232 --rc genhtml_branch_coverage=1 00:31:20.232 --rc genhtml_function_coverage=1 00:31:20.232 --rc genhtml_legend=1 00:31:20.232 --rc geninfo_all_blocks=1 00:31:20.232 --rc geninfo_unexecuted_blocks=1 00:31:20.232 00:31:20.232 ' 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:20.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.232 --rc genhtml_branch_coverage=1 00:31:20.232 --rc genhtml_function_coverage=1 00:31:20.232 --rc genhtml_legend=1 00:31:20.232 --rc geninfo_all_blocks=1 00:31:20.232 --rc geninfo_unexecuted_blocks=1 00:31:20.232 00:31:20.232 ' 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:20.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.232 --rc genhtml_branch_coverage=1 00:31:20.232 --rc genhtml_function_coverage=1 00:31:20.232 --rc genhtml_legend=1 00:31:20.232 --rc geninfo_all_blocks=1 00:31:20.232 --rc geninfo_unexecuted_blocks=1 00:31:20.232 00:31:20.232 ' 00:31:20.232 00:38:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.232 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.493 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.493 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:20.493 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.493 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.493 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.493 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.493 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:20.494 00:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.637 00:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:28.637 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:28.637 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:28.637 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:28.637 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:28.637 00:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:28.637 00:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:31:28.637 00:31:28.637 --- 10.0.0.2 ping statistics --- 00:31:28.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.637 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:31:28.637 00:31:28.637 --- 10.0.0.1 ping statistics --- 00:31:28.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.637 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3478232 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3478232 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3478232 ']' 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.637 [2024-10-09 00:38:58.147866] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
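Condensed, the physical-NIC bring-up that precedes this lvs_grow run looks roughly as follows. This is a sketch based on this particular machine: it assumes the two E810 ports enumerate as cvl_0_0 and cvl_0_1 exactly as in the trace, and the nvmf_tgt path is abbreviated.

  # target side of the link moves into its own netns, initiator side stays in the root netns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface and sanity-ping both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target itself then starts inside the namespace: single core, interrupt mode
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1

Interface names, addresses and core masks are taken from this run and will differ on other hosts; the startup notices that follow confirm the target came up with interrupt mode enabled on core 0.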
00:31:28.637 [2024-10-09 00:38:58.148918] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:31:28.637 [2024-10-09 00:38:58.148962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.637 [2024-10-09 00:38:58.238644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.637 [2024-10-09 00:38:58.330262] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.637 [2024-10-09 00:38:58.330327] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.637 [2024-10-09 00:38:58.330336] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.637 [2024-10-09 00:38:58.330343] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.637 [2024-10-09 00:38:58.330349] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.637 [2024-10-09 00:38:58.331153] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.637 [2024-10-09 00:38:58.407076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:28.637 [2024-10-09 00:38:58.407361] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:28.637 00:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:28.637 [2024-10-09 00:38:59.180082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.637 ************************************ 00:31:28.637 START TEST lvs_grow_clean 00:31:28.637 ************************************ 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:28.637 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:28.899 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:28.899 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:28.899 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:29.160 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:29.160 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:29.160 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:29.422 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:29.422 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:29.422 00:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 lvol 150 00:31:29.683 00:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ecda8d43-e78f-4a28-8f18-549c52f57fee 00:31:29.683 00:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:29.683 00:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:29.683 [2024-10-09 00:39:00.227702] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:29.683 [2024-10-09 00:39:00.227887] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:29.683 true 00:31:29.683 00:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:29.683 00:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:29.954 00:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:29.954 00:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:30.217 00:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ecda8d43-e78f-4a28-8f18-549c52f57fee 00:31:30.217 00:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.478 [2024-10-09 00:39:00.984472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.478 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3478878 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3478878 /var/tmp/bdevperf.sock 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3478878 ']' 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:30.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:30.740 00:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:30.740 [2024-10-09 00:39:01.237191] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:31:30.740 [2024-10-09 00:39:01.237263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478878 ] 00:31:30.740 [2024-10-09 00:39:01.319673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.001 [2024-10-09 00:39:01.416549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.572 00:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:31.572 00:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:31:31.572 00:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:31.833 Nvme0n1 00:31:31.833 00:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:32.094 [ 00:31:32.094 { 00:31:32.094 "name": "Nvme0n1", 00:31:32.094 "aliases": [ 00:31:32.094 "ecda8d43-e78f-4a28-8f18-549c52f57fee" 00:31:32.094 ], 00:31:32.094 "product_name": "NVMe disk", 00:31:32.094 "block_size": 4096, 00:31:32.094 "num_blocks": 38912, 00:31:32.094 "uuid": "ecda8d43-e78f-4a28-8f18-549c52f57fee", 00:31:32.094 "numa_id": 0, 00:31:32.094 "assigned_rate_limits": { 00:31:32.094 "rw_ios_per_sec": 0, 00:31:32.094 "rw_mbytes_per_sec": 0, 00:31:32.094 "r_mbytes_per_sec": 0, 00:31:32.094 "w_mbytes_per_sec": 0 00:31:32.094 }, 00:31:32.094 "claimed": false, 00:31:32.094 "zoned": false, 00:31:32.094 "supported_io_types": { 00:31:32.094 "read": true, 00:31:32.094 "write": true, 00:31:32.094 "unmap": true, 00:31:32.094 "flush": true, 00:31:32.094 "reset": true, 00:31:32.094 "nvme_admin": true, 00:31:32.094 "nvme_io": true, 00:31:32.094 "nvme_io_md": false, 00:31:32.094 "write_zeroes": true, 00:31:32.094 "zcopy": false, 00:31:32.094 "get_zone_info": false, 00:31:32.094 "zone_management": false, 00:31:32.094 "zone_append": false, 00:31:32.094 "compare": true, 00:31:32.094 "compare_and_write": true, 00:31:32.094 "abort": true, 00:31:32.094 "seek_hole": false, 00:31:32.094 "seek_data": false, 00:31:32.094 "copy": true, 
00:31:32.094 "nvme_iov_md": false 00:31:32.094 }, 00:31:32.094 "memory_domains": [ 00:31:32.094 { 00:31:32.094 "dma_device_id": "system", 00:31:32.094 "dma_device_type": 1 00:31:32.094 } 00:31:32.094 ], 00:31:32.094 "driver_specific": { 00:31:32.094 "nvme": [ 00:31:32.094 { 00:31:32.094 "trid": { 00:31:32.094 "trtype": "TCP", 00:31:32.094 "adrfam": "IPv4", 00:31:32.094 "traddr": "10.0.0.2", 00:31:32.094 "trsvcid": "4420", 00:31:32.094 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:32.094 }, 00:31:32.094 "ctrlr_data": { 00:31:32.094 "cntlid": 1, 00:31:32.094 "vendor_id": "0x8086", 00:31:32.094 "model_number": "SPDK bdev Controller", 00:31:32.094 "serial_number": "SPDK0", 00:31:32.094 "firmware_revision": "25.01", 00:31:32.094 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:32.094 "oacs": { 00:31:32.094 "security": 0, 00:31:32.094 "format": 0, 00:31:32.094 "firmware": 0, 00:31:32.094 "ns_manage": 0 00:31:32.094 }, 00:31:32.094 "multi_ctrlr": true, 00:31:32.094 "ana_reporting": false 00:31:32.094 }, 00:31:32.094 "vs": { 00:31:32.094 "nvme_version": "1.3" 00:31:32.094 }, 00:31:32.094 "ns_data": { 00:31:32.094 "id": 1, 00:31:32.094 "can_share": true 00:31:32.094 } 00:31:32.094 } 00:31:32.094 ], 00:31:32.094 "mp_policy": "active_passive" 00:31:32.095 } 00:31:32.095 } 00:31:32.095 ] 00:31:32.095 00:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3479084 00:31:32.095 00:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:32.095 00:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:32.095 Running I/O for 10 seconds... 
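The lvs_grow_clean test drives I/O from a userspace bdevperf instance rather than a kernel NVMe host: bdevperf attaches the exported namespace as a local bdev over its own RPC socket and is then told to run the workload. A condensed sketch of the commands traced above, with the socket path and bdev names as used in this run:

  # start bdevperf idle (-z) on core 1: 4 KiB random writes, queue depth 128, 10 s
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the remote namespace as bdev Nvme0n1 and dump its properties
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
  # kick off the 10-second workload, then grow the lvstore while it runs
  # (the backing AIO file was already truncated from 200M to 400M and rescanned at 00:39:00)
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # lvstore UUID from bdev_lvol_create_lvstore

What the test asserts is that total_data_clusters reported by bdev_lvol_get_lvstores moves from 49 to 99 after the grow, which can be seen in the per-second run log that follows.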
00:31:33.037 Latency(us) 00:31:33.037 [2024-10-08T22:39:03.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:33.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:33.037 Nvme0n1 : 1.00 16630.00 64.96 0.00 0.00 0.00 0.00 0.00 00:31:33.037 [2024-10-08T22:39:03.672Z] =================================================================================================================== 00:31:33.037 [2024-10-08T22:39:03.672Z] Total : 16630.00 64.96 0.00 0.00 0.00 0.00 0.00 00:31:33.037 00:31:33.978 00:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:34.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:34.239 Nvme0n1 : 2.00 16858.50 65.85 0.00 0.00 0.00 0.00 0.00 00:31:34.239 [2024-10-08T22:39:04.874Z] =================================================================================================================== 00:31:34.239 [2024-10-08T22:39:04.874Z] Total : 16858.50 65.85 0.00 0.00 0.00 0.00 0.00 00:31:34.239 00:31:34.239 true 00:31:34.240 00:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:34.240 00:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:34.501 00:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:34.501 00:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:34.501 00:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3479084 00:31:35.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:35.072 Nvme0n1 : 3.00 16956.00 66.23 0.00 0.00 0.00 0.00 0.00 00:31:35.072 [2024-10-08T22:39:05.707Z] =================================================================================================================== 00:31:35.072 [2024-10-08T22:39:05.707Z] Total : 16956.00 66.23 0.00 0.00 0.00 0.00 0.00 00:31:35.072 00:31:36.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:36.013 Nvme0n1 : 4.00 17197.00 67.18 0.00 0.00 0.00 0.00 0.00 00:31:36.013 [2024-10-08T22:39:06.648Z] =================================================================================================================== 00:31:36.013 [2024-10-08T22:39:06.648Z] Total : 17197.00 67.18 0.00 0.00 0.00 0.00 0.00 00:31:36.013 00:31:37.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:37.418 Nvme0n1 : 5.00 18609.20 72.69 0.00 0.00 0.00 0.00 0.00 00:31:37.418 [2024-10-08T22:39:08.053Z] =================================================================================================================== 00:31:37.418 [2024-10-08T22:39:08.053Z] Total : 18609.20 72.69 0.00 0.00 0.00 0.00 0.00 00:31:37.418 00:31:38.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:38.360 Nvme0n1 : 6.00 19731.67 77.08 0.00 0.00 0.00 0.00 0.00 00:31:38.360 [2024-10-08T22:39:08.995Z] 
=================================================================================================================== 00:31:38.360 [2024-10-08T22:39:08.995Z] Total : 19731.67 77.08 0.00 0.00 0.00 0.00 0.00 00:31:38.360 00:31:39.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:39.317 Nvme0n1 : 7.00 20542.29 80.24 0.00 0.00 0.00 0.00 0.00 00:31:39.317 [2024-10-08T22:39:09.952Z] =================================================================================================================== 00:31:39.317 [2024-10-08T22:39:09.952Z] Total : 20542.29 80.24 0.00 0.00 0.00 0.00 0.00 00:31:39.317 00:31:40.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:40.264 Nvme0n1 : 8.00 21110.50 82.46 0.00 0.00 0.00 0.00 0.00 00:31:40.264 [2024-10-08T22:39:10.899Z] =================================================================================================================== 00:31:40.264 [2024-10-08T22:39:10.899Z] Total : 21110.50 82.46 0.00 0.00 0.00 0.00 0.00 00:31:40.264 00:31:41.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.211 Nvme0n1 : 9.00 21587.33 84.33 0.00 0.00 0.00 0.00 0.00 00:31:41.211 [2024-10-08T22:39:11.846Z] =================================================================================================================== 00:31:41.211 [2024-10-08T22:39:11.846Z] Total : 21587.33 84.33 0.00 0.00 0.00 0.00 0.00 00:31:41.211 00:31:42.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:42.152 Nvme0n1 : 10.00 21975.80 85.84 0.00 0.00 0.00 0.00 0.00 00:31:42.152 [2024-10-08T22:39:12.787Z] =================================================================================================================== 00:31:42.152 [2024-10-08T22:39:12.787Z] Total : 21975.80 85.84 0.00 0.00 0.00 0.00 0.00 00:31:42.152 00:31:42.152 00:31:42.152 Latency(us) 00:31:42.152 [2024-10-08T22:39:12.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:42.152 Nvme0n1 : 10.01 21977.43 85.85 0.00 0.00 5820.94 3208.53 32331.09 00:31:42.152 [2024-10-08T22:39:12.787Z] =================================================================================================================== 00:31:42.152 [2024-10-08T22:39:12.787Z] Total : 21977.43 85.85 0.00 0.00 5820.94 3208.53 32331.09 00:31:42.152 { 00:31:42.152 "results": [ 00:31:42.152 { 00:31:42.152 "job": "Nvme0n1", 00:31:42.152 "core_mask": "0x2", 00:31:42.152 "workload": "randwrite", 00:31:42.152 "status": "finished", 00:31:42.152 "queue_depth": 128, 00:31:42.152 "io_size": 4096, 00:31:42.152 "runtime": 10.005083, 00:31:42.152 "iops": 21977.428872903904, 00:31:42.152 "mibps": 85.84933153478087, 00:31:42.152 "io_failed": 0, 00:31:42.152 "io_timeout": 0, 00:31:42.152 "avg_latency_us": 5820.937902246315, 00:31:42.152 "min_latency_us": 3208.5333333333333, 00:31:42.152 "max_latency_us": 32331.093333333334 00:31:42.152 } 00:31:42.152 ], 00:31:42.152 "core_count": 1 00:31:42.152 } 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3478878 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3478878 ']' 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3478878 
00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3478878 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3478878' 00:31:42.152 killing process with pid 3478878 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3478878 00:31:42.152 Received shutdown signal, test time was about 10.000000 seconds 00:31:42.152 00:31:42.152 Latency(us) 00:31:42.152 [2024-10-08T22:39:12.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.152 [2024-10-08T22:39:12.787Z] =================================================================================================================== 00:31:42.152 [2024-10-08T22:39:12.787Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:42.152 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3478878 00:31:42.412 00:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:42.412 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.673 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:42.673 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:42.934 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:42.934 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:42.934 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:43.194 [2024-10-09 00:39:13.591869] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 
00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:43.194 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:43.455 request: 00:31:43.455 { 00:31:43.455 "uuid": "e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5", 00:31:43.455 "method": "bdev_lvol_get_lvstores", 00:31:43.455 "req_id": 1 00:31:43.455 } 00:31:43.455 Got JSON-RPC error response 00:31:43.455 response: 00:31:43.455 { 00:31:43.455 "code": -19, 00:31:43.455 "message": "No such device" 00:31:43.455 } 00:31:43.455 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:31:43.455 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:43.455 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:43.455 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:43.455 00:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:43.455 aio_bdev 00:31:43.455 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
ecda8d43-e78f-4a28-8f18-549c52f57fee 00:31:43.455 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ecda8d43-e78f-4a28-8f18-549c52f57fee 00:31:43.455 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:43.455 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:31:43.455 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:43.455 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:43.455 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:43.715 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ecda8d43-e78f-4a28-8f18-549c52f57fee -t 2000 00:31:43.976 [ 00:31:43.976 { 00:31:43.976 "name": "ecda8d43-e78f-4a28-8f18-549c52f57fee", 00:31:43.976 "aliases": [ 00:31:43.976 "lvs/lvol" 00:31:43.976 ], 00:31:43.976 "product_name": "Logical Volume", 00:31:43.976 "block_size": 4096, 00:31:43.976 "num_blocks": 38912, 00:31:43.976 "uuid": "ecda8d43-e78f-4a28-8f18-549c52f57fee", 00:31:43.976 "assigned_rate_limits": { 00:31:43.976 "rw_ios_per_sec": 0, 00:31:43.976 "rw_mbytes_per_sec": 0, 00:31:43.976 "r_mbytes_per_sec": 0, 00:31:43.976 "w_mbytes_per_sec": 0 00:31:43.976 }, 00:31:43.976 "claimed": false, 00:31:43.976 "zoned": false, 00:31:43.976 "supported_io_types": { 00:31:43.976 "read": true, 00:31:43.976 "write": true, 00:31:43.976 "unmap": true, 00:31:43.976 "flush": false, 00:31:43.976 "reset": true, 00:31:43.976 "nvme_admin": false, 00:31:43.976 "nvme_io": false, 00:31:43.976 "nvme_io_md": false, 00:31:43.976 "write_zeroes": true, 00:31:43.976 "zcopy": false, 00:31:43.976 "get_zone_info": false, 00:31:43.976 "zone_management": false, 00:31:43.976 "zone_append": false, 00:31:43.976 "compare": false, 00:31:43.976 "compare_and_write": false, 00:31:43.976 "abort": false, 00:31:43.976 "seek_hole": true, 00:31:43.976 "seek_data": true, 00:31:43.976 "copy": false, 00:31:43.976 "nvme_iov_md": false 00:31:43.976 }, 00:31:43.976 "driver_specific": { 00:31:43.976 "lvol": { 00:31:43.976 "lvol_store_uuid": "e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5", 00:31:43.976 "base_bdev": "aio_bdev", 00:31:43.976 "thin_provision": false, 00:31:43.976 "num_allocated_clusters": 38, 00:31:43.976 "snapshot": false, 00:31:43.976 "clone": false, 00:31:43.976 "esnap_clone": false 00:31:43.976 } 00:31:43.976 } 00:31:43.976 } 00:31:43.976 ] 00:31:43.976 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:31:43.976 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:43.976 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:43.976 00:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:43.976 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:43.976 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:44.238 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:44.238 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ecda8d43-e78f-4a28-8f18-549c52f57fee 00:31:44.499 00:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e5ba8dc1-ce13-40c3-b3ac-49d406bc8fe5 00:31:44.760 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:44.760 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:44.760 00:31:44.760 real 0m16.077s 00:31:44.760 user 0m15.679s 00:31:44.760 sys 0m1.493s 00:31:44.760 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:44.760 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:44.760 ************************************ 00:31:44.760 END TEST lvs_grow_clean 00:31:44.760 ************************************ 00:31:44.760 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:44.760 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:44.760 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:44.760 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:45.021 ************************************ 00:31:45.021 START TEST lvs_grow_dirty 00:31:45.021 ************************************ 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:45.021 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:45.281 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:45.282 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:45.282 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:31:45.282 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:31:45.282 00:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:45.542 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:45.542 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:45.542 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 lvol 150 00:31:45.802 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=64ccc1f4-9cda-4477-93d3-9117a23987c8 00:31:45.802 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:45.802 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:45.802 [2024-10-09 00:39:16.379771] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:45.802 [2024-10-09 00:39:16.379945] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:45.802 true 00:31:45.802 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:31:45.802 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:46.062 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:46.062 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:46.323 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 64ccc1f4-9cda-4477-93d3-9117a23987c8 00:31:46.323 00:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:46.584 [2024-10-09 00:39:17.108367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.584 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3482455 00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3482455 /var/tmp/bdevperf.sock 00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3482455 ']' 00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:46.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
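The cluster counts the test asserts follow from the sizes used above; a rough sketch of the arithmetic (the one-cluster shortfall against the raw size is presumably lvstore metadata overhead):

# 200 MiB backing file / 4 MiB clusters = 50, reported as 49 data clusters
# 150 MiB lvol / 4 MiB clusters = 37.5, rounded up to 38 allocated clusters
# after truncating the file to 400 MiB and rescanning (51200 -> 102400 blocks):
echo $(( 400 / 4 - 1 ))   # expected total_data_clusters after the grow: 99
echo $(( 99 - 38 ))       # expected free_clusters with the 150 MiB lvol in place: 61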
00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:46.845 00:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:46.845 [2024-10-09 00:39:17.391918] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:31:46.845 [2024-10-09 00:39:17.391995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482455 ] 00:31:46.845 [2024-10-09 00:39:17.472534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.105 [2024-10-09 00:39:17.536587] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.691 00:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:47.691 00:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:47.692 00:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:47.953 Nvme0n1 00:31:47.953 00:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:48.212 [ 00:31:48.212 { 00:31:48.212 "name": "Nvme0n1", 00:31:48.212 "aliases": [ 00:31:48.212 "64ccc1f4-9cda-4477-93d3-9117a23987c8" 00:31:48.212 ], 00:31:48.212 "product_name": "NVMe disk", 00:31:48.212 "block_size": 4096, 00:31:48.212 "num_blocks": 38912, 00:31:48.212 "uuid": "64ccc1f4-9cda-4477-93d3-9117a23987c8", 00:31:48.212 "numa_id": 0, 00:31:48.212 "assigned_rate_limits": { 00:31:48.212 "rw_ios_per_sec": 0, 00:31:48.212 "rw_mbytes_per_sec": 0, 00:31:48.212 "r_mbytes_per_sec": 0, 00:31:48.212 "w_mbytes_per_sec": 0 00:31:48.212 }, 00:31:48.212 "claimed": false, 00:31:48.212 "zoned": false, 00:31:48.212 "supported_io_types": { 00:31:48.212 "read": true, 00:31:48.212 "write": true, 00:31:48.212 "unmap": true, 00:31:48.212 "flush": true, 00:31:48.212 "reset": true, 00:31:48.212 "nvme_admin": true, 00:31:48.212 "nvme_io": true, 00:31:48.212 "nvme_io_md": false, 00:31:48.212 "write_zeroes": true, 00:31:48.213 "zcopy": false, 00:31:48.213 "get_zone_info": false, 00:31:48.213 "zone_management": false, 00:31:48.213 "zone_append": false, 00:31:48.213 "compare": true, 00:31:48.213 "compare_and_write": true, 00:31:48.213 "abort": true, 00:31:48.213 "seek_hole": false, 00:31:48.213 "seek_data": false, 00:31:48.213 "copy": true, 00:31:48.213 "nvme_iov_md": false 00:31:48.213 }, 00:31:48.213 "memory_domains": [ 00:31:48.213 { 00:31:48.213 "dma_device_id": "system", 00:31:48.213 "dma_device_type": 1 00:31:48.213 } 00:31:48.213 ], 00:31:48.213 "driver_specific": { 00:31:48.213 "nvme": [ 00:31:48.213 { 00:31:48.213 "trid": { 00:31:48.213 "trtype": "TCP", 00:31:48.213 "adrfam": "IPv4", 00:31:48.213 "traddr": "10.0.0.2", 00:31:48.213 "trsvcid": "4420", 00:31:48.213 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:48.213 }, 00:31:48.213 "ctrlr_data": 
{ 00:31:48.213 "cntlid": 1, 00:31:48.213 "vendor_id": "0x8086", 00:31:48.213 "model_number": "SPDK bdev Controller", 00:31:48.213 "serial_number": "SPDK0", 00:31:48.213 "firmware_revision": "25.01", 00:31:48.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:48.213 "oacs": { 00:31:48.213 "security": 0, 00:31:48.213 "format": 0, 00:31:48.213 "firmware": 0, 00:31:48.213 "ns_manage": 0 00:31:48.213 }, 00:31:48.213 "multi_ctrlr": true, 00:31:48.213 "ana_reporting": false 00:31:48.213 }, 00:31:48.213 "vs": { 00:31:48.213 "nvme_version": "1.3" 00:31:48.213 }, 00:31:48.213 "ns_data": { 00:31:48.213 "id": 1, 00:31:48.213 "can_share": true 00:31:48.213 } 00:31:48.213 } 00:31:48.213 ], 00:31:48.213 "mp_policy": "active_passive" 00:31:48.213 } 00:31:48.213 } 00:31:48.213 ] 00:31:48.213 00:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3482603 00:31:48.213 00:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:48.213 00:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:48.213 Running I/O for 10 seconds... 00:31:49.153 Latency(us) 00:31:49.153 [2024-10-08T22:39:19.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.153 Nvme0n1 : 1.00 24380.00 95.23 0.00 0.00 0.00 0.00 0.00 00:31:49.153 [2024-10-08T22:39:19.788Z] =================================================================================================================== 00:31:49.153 [2024-10-08T22:39:19.788Z] Total : 24380.00 95.23 0.00 0.00 0.00 0.00 0.00 00:31:49.153 00:31:50.094 00:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:31:50.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.094 Nvme0n1 : 2.00 24695.50 96.47 0.00 0.00 0.00 0.00 0.00 00:31:50.094 [2024-10-08T22:39:20.729Z] =================================================================================================================== 00:31:50.094 [2024-10-08T22:39:20.729Z] Total : 24695.50 96.47 0.00 0.00 0.00 0.00 0.00 00:31:50.094 00:31:50.354 true 00:31:50.355 00:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:31:50.355 00:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:50.615 00:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:50.615 00:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:50.615 00:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3482603 00:31:51.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.184 Nvme0n1 : 
3.00 24915.67 97.33 0.00 0.00 0.00 0.00 0.00 00:31:51.184 [2024-10-08T22:39:21.819Z] =================================================================================================================== 00:31:51.184 [2024-10-08T22:39:21.819Z] Total : 24915.67 97.33 0.00 0.00 0.00 0.00 0.00 00:31:51.184 00:31:52.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.127 Nvme0n1 : 4.00 25039.00 97.81 0.00 0.00 0.00 0.00 0.00 00:31:52.127 [2024-10-08T22:39:22.762Z] =================================================================================================================== 00:31:52.127 [2024-10-08T22:39:22.762Z] Total : 25039.00 97.81 0.00 0.00 0.00 0.00 0.00 00:31:52.127 00:31:53.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.093 Nvme0n1 : 5.00 25112.60 98.10 0.00 0.00 0.00 0.00 0.00 00:31:53.093 [2024-10-08T22:39:23.728Z] =================================================================================================================== 00:31:53.093 [2024-10-08T22:39:23.728Z] Total : 25112.60 98.10 0.00 0.00 0.00 0.00 0.00 00:31:53.093 00:31:54.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.479 Nvme0n1 : 6.00 25172.33 98.33 0.00 0.00 0.00 0.00 0.00 00:31:54.479 [2024-10-08T22:39:25.114Z] =================================================================================================================== 00:31:54.479 [2024-10-08T22:39:25.114Z] Total : 25172.33 98.33 0.00 0.00 0.00 0.00 0.00 00:31:54.479 00:31:55.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.435 Nvme0n1 : 7.00 25215.14 98.50 0.00 0.00 0.00 0.00 0.00 00:31:55.435 [2024-10-08T22:39:26.070Z] =================================================================================================================== 00:31:55.435 [2024-10-08T22:39:26.070Z] Total : 25215.14 98.50 0.00 0.00 0.00 0.00 0.00 00:31:55.435 00:31:56.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.377 Nvme0n1 : 8.00 25239.50 98.59 0.00 0.00 0.00 0.00 0.00 00:31:56.377 [2024-10-08T22:39:27.012Z] =================================================================================================================== 00:31:56.377 [2024-10-08T22:39:27.012Z] Total : 25239.50 98.59 0.00 0.00 0.00 0.00 0.00 00:31:56.377 00:31:57.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.319 Nvme0n1 : 9.00 25272.11 98.72 0.00 0.00 0.00 0.00 0.00 00:31:57.319 [2024-10-08T22:39:27.954Z] =================================================================================================================== 00:31:57.319 [2024-10-08T22:39:27.954Z] Total : 25272.11 98.72 0.00 0.00 0.00 0.00 0.00 00:31:57.319 00:31:58.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.260 Nvme0n1 : 10.00 25292.10 98.80 0.00 0.00 0.00 0.00 0.00 00:31:58.260 [2024-10-08T22:39:28.895Z] =================================================================================================================== 00:31:58.260 [2024-10-08T22:39:28.895Z] Total : 25292.10 98.80 0.00 0.00 0.00 0.00 0.00 00:31:58.260 00:31:58.260 00:31:58.260 Latency(us) 00:31:58.260 [2024-10-08T22:39:28.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.260 Nvme0n1 : 10.00 25292.90 98.80 0.00 0.00 5057.80 3112.96 31675.73 00:31:58.260 
[2024-10-08T22:39:28.895Z] =================================================================================================================== 00:31:58.260 [2024-10-08T22:39:28.895Z] Total : 25292.90 98.80 0.00 0.00 5057.80 3112.96 31675.73 00:31:58.260 { 00:31:58.260 "results": [ 00:31:58.260 { 00:31:58.260 "job": "Nvme0n1", 00:31:58.260 "core_mask": "0x2", 00:31:58.260 "workload": "randwrite", 00:31:58.260 "status": "finished", 00:31:58.260 "queue_depth": 128, 00:31:58.260 "io_size": 4096, 00:31:58.260 "runtime": 10.004745, 00:31:58.260 "iops": 25292.898519652426, 00:31:58.260 "mibps": 98.80038484239229, 00:31:58.260 "io_failed": 0, 00:31:58.260 "io_timeout": 0, 00:31:58.260 "avg_latency_us": 5057.80341188202, 00:31:58.260 "min_latency_us": 3112.96, 00:31:58.260 "max_latency_us": 31675.733333333334 00:31:58.260 } 00:31:58.260 ], 00:31:58.260 "core_count": 1 00:31:58.260 } 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3482455 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3482455 ']' 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3482455 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3482455 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3482455' 00:31:58.260 killing process with pid 3482455 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3482455 00:31:58.260 Received shutdown signal, test time was about 10.000000 seconds 00:31:58.260 00:31:58.260 Latency(us) 00:31:58.260 [2024-10-08T22:39:28.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.260 [2024-10-08T22:39:28.895Z] =================================================================================================================== 00:31:58.260 [2024-10-08T22:39:28.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:58.260 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3482455 00:31:58.520 00:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:58.520 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:58.781 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:31:58.781 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3478232 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3478232 00:31:59.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3478232 Killed "${NVMF_APP[@]}" "$@" 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3484680 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3484680 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3484680 ']' 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
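What follows in the trace is the dirty-shutdown half of the test: the original nvmf target is killed with SIGKILL so the lvstore is never cleanly unloaded, then a fresh target is started in interrupt mode inside the test namespace. A sketch of that step, using the PID and namespace name from this particular run:

# Kill the old target hard so the lvstore superblock stays dirty on the aio file
kill -9 3478232
# Restart nvmf_tgt in interrupt mode inside the test's network namespace and wait
# for /var/tmp/spdk.sock to come back (the script uses its waitforlisten helper)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &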
00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:59.043 00:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:59.043 [2024-10-09 00:39:29.570624] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.043 [2024-10-09 00:39:29.571643] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:31:59.043 [2024-10-09 00:39:29.571688] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.043 [2024-10-09 00:39:29.655090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.304 [2024-10-09 00:39:29.711613] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.304 [2024-10-09 00:39:29.711645] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.304 [2024-10-09 00:39:29.711651] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.304 [2024-10-09 00:39:29.711656] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.304 [2024-10-09 00:39:29.711660] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.304 [2024-10-09 00:39:29.712119] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.304 [2024-10-09 00:39:29.762045] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.304 [2024-10-09 00:39:29.762234] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:59.875 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:59.875 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:59.876 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:59.876 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:59.876 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:59.876 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.876 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:00.207 [2024-10-09 00:39:30.554322] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:00.207 [2024-10-09 00:39:30.554547] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:00.207 [2024-10-09 00:39:30.554636] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:00.207 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:00.207 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 64ccc1f4-9cda-4477-93d3-9117a23987c8 00:32:00.207 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=64ccc1f4-9cda-4477-93d3-9117a23987c8 00:32:00.207 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:00.207 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:00.207 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:00.207 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:00.207 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:00.207 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 64ccc1f4-9cda-4477-93d3-9117a23987c8 -t 2000 00:32:00.467 [ 00:32:00.467 { 00:32:00.467 "name": "64ccc1f4-9cda-4477-93d3-9117a23987c8", 00:32:00.467 "aliases": [ 00:32:00.467 "lvs/lvol" 00:32:00.467 ], 00:32:00.467 "product_name": "Logical Volume", 00:32:00.467 "block_size": 4096, 00:32:00.467 "num_blocks": 38912, 00:32:00.467 "uuid": "64ccc1f4-9cda-4477-93d3-9117a23987c8", 00:32:00.467 "assigned_rate_limits": { 00:32:00.467 "rw_ios_per_sec": 0, 00:32:00.467 "rw_mbytes_per_sec": 0, 00:32:00.467 
"r_mbytes_per_sec": 0, 00:32:00.467 "w_mbytes_per_sec": 0 00:32:00.467 }, 00:32:00.467 "claimed": false, 00:32:00.467 "zoned": false, 00:32:00.467 "supported_io_types": { 00:32:00.467 "read": true, 00:32:00.467 "write": true, 00:32:00.467 "unmap": true, 00:32:00.467 "flush": false, 00:32:00.467 "reset": true, 00:32:00.467 "nvme_admin": false, 00:32:00.467 "nvme_io": false, 00:32:00.467 "nvme_io_md": false, 00:32:00.467 "write_zeroes": true, 00:32:00.467 "zcopy": false, 00:32:00.467 "get_zone_info": false, 00:32:00.467 "zone_management": false, 00:32:00.467 "zone_append": false, 00:32:00.467 "compare": false, 00:32:00.467 "compare_and_write": false, 00:32:00.467 "abort": false, 00:32:00.467 "seek_hole": true, 00:32:00.467 "seek_data": true, 00:32:00.467 "copy": false, 00:32:00.467 "nvme_iov_md": false 00:32:00.467 }, 00:32:00.467 "driver_specific": { 00:32:00.467 "lvol": { 00:32:00.467 "lvol_store_uuid": "c6e9eec8-17ef-453b-b527-a75e354ae3d4", 00:32:00.467 "base_bdev": "aio_bdev", 00:32:00.467 "thin_provision": false, 00:32:00.467 "num_allocated_clusters": 38, 00:32:00.467 "snapshot": false, 00:32:00.467 "clone": false, 00:32:00.467 "esnap_clone": false 00:32:00.467 } 00:32:00.467 } 00:32:00.467 } 00:32:00.467 ] 00:32:00.467 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:00.467 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:32:00.467 00:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:00.467 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:00.467 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:32:00.467 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:00.728 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:00.728 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:00.989 [2024-10-09 00:39:31.412585] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:32:00.989 00:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:00.989 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:32:01.250 request: 00:32:01.250 { 00:32:01.250 "uuid": "c6e9eec8-17ef-453b-b527-a75e354ae3d4", 00:32:01.250 "method": "bdev_lvol_get_lvstores", 00:32:01.250 "req_id": 1 00:32:01.250 } 00:32:01.250 Got JSON-RPC error response 00:32:01.250 response: 00:32:01.250 { 00:32:01.250 "code": -19, 00:32:01.250 "message": "No such device" 00:32:01.250 } 00:32:01.250 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:01.250 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:01.250 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:01.250 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:01.250 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:01.250 aio_bdev 00:32:01.251 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 64ccc1f4-9cda-4477-93d3-9117a23987c8 00:32:01.251 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=64ccc1f4-9cda-4477-93d3-9117a23987c8 00:32:01.251 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:01.251 00:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:01.251 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:01.251 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:01.251 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:01.512 00:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 64ccc1f4-9cda-4477-93d3-9117a23987c8 -t 2000 00:32:01.784 [ 00:32:01.784 { 00:32:01.784 "name": "64ccc1f4-9cda-4477-93d3-9117a23987c8", 00:32:01.784 "aliases": [ 00:32:01.784 "lvs/lvol" 00:32:01.785 ], 00:32:01.785 "product_name": "Logical Volume", 00:32:01.785 "block_size": 4096, 00:32:01.785 "num_blocks": 38912, 00:32:01.785 "uuid": "64ccc1f4-9cda-4477-93d3-9117a23987c8", 00:32:01.785 "assigned_rate_limits": { 00:32:01.785 "rw_ios_per_sec": 0, 00:32:01.785 "rw_mbytes_per_sec": 0, 00:32:01.785 "r_mbytes_per_sec": 0, 00:32:01.785 "w_mbytes_per_sec": 0 00:32:01.785 }, 00:32:01.785 "claimed": false, 00:32:01.785 "zoned": false, 00:32:01.785 "supported_io_types": { 00:32:01.785 "read": true, 00:32:01.785 "write": true, 00:32:01.785 "unmap": true, 00:32:01.785 "flush": false, 00:32:01.785 "reset": true, 00:32:01.785 "nvme_admin": false, 00:32:01.785 "nvme_io": false, 00:32:01.785 "nvme_io_md": false, 00:32:01.785 "write_zeroes": true, 00:32:01.785 "zcopy": false, 00:32:01.785 "get_zone_info": false, 00:32:01.785 "zone_management": false, 00:32:01.785 "zone_append": false, 00:32:01.785 "compare": false, 00:32:01.785 "compare_and_write": false, 00:32:01.785 "abort": false, 00:32:01.785 "seek_hole": true, 00:32:01.785 "seek_data": true, 00:32:01.785 "copy": false, 00:32:01.785 "nvme_iov_md": false 00:32:01.785 }, 00:32:01.785 "driver_specific": { 00:32:01.785 "lvol": { 00:32:01.785 "lvol_store_uuid": "c6e9eec8-17ef-453b-b527-a75e354ae3d4", 00:32:01.785 "base_bdev": "aio_bdev", 00:32:01.786 "thin_provision": false, 00:32:01.786 "num_allocated_clusters": 38, 00:32:01.786 "snapshot": false, 00:32:01.786 "clone": false, 00:32:01.786 "esnap_clone": false 00:32:01.786 } 00:32:01.786 } 00:32:01.786 } 00:32:01.786 ] 00:32:01.786 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:01.786 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:01.786 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:32:01.786 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:01.786 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:32:01.786 00:39:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:02.110 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:02.110 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 64ccc1f4-9cda-4477-93d3-9117a23987c8 00:32:02.110 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6e9eec8-17ef-453b-b527-a75e354ae3d4 00:32:02.441 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:02.441 00:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:02.441 00:32:02.441 real 0m17.608s 00:32:02.441 user 0m35.371s 00:32:02.441 sys 0m3.283s 00:32:02.441 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:02.441 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:02.441 ************************************ 00:32:02.441 END TEST lvs_grow_dirty 00:32:02.441 ************************************ 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:02.745 nvmf_trace.0 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
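The lvs_grow_dirty tail above reduces to a short RPC sequence: query the lvstore, compare free/total clusters, hot-remove the base AIO bdev and expect the same query to fail with -19, then recreate it and tear everything down. A minimal sketch of that check, assuming the rpc.py path and lvstore UUID taken from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as used in this trace
lvs_uuid=c6e9eec8-17ef-453b-b527-a75e354ae3d4                          # lvstore UUID from this run

free_clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
data_clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
(( free_clusters == 61 ))    # grown lvstore is expected to report 61 free clusters
(( data_clusters == 99 ))    # and 99 data clusters in total

$rpc bdev_aio_delete aio_bdev                    # hot-remove the base bdev: the lvstore closes
! $rpc bdev_lvol_get_lvstores -u "$lvs_uuid"     # same query now fails with -19 (No such device)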
00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.745 rmmod nvme_tcp 00:32:02.745 rmmod nvme_fabrics 00:32:02.745 rmmod nvme_keyring 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3484680 ']' 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3484680 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3484680 ']' 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3484680 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3484680 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3484680' 00:32:02.745 killing process with pid 3484680 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3484680 00:32:02.745 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3484680 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.006 00:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.917 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:04.917 00:32:04.917 real 0m44.848s 00:32:04.917 user 0m53.990s 00:32:04.917 sys 0m10.725s 00:32:04.917 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:04.917 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:04.917 ************************************ 00:32:04.917 END TEST nvmf_lvs_grow 00:32:04.917 ************************************ 00:32:04.917 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:04.917 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:04.917 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:04.917 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.190 ************************************ 00:32:05.190 START TEST nvmf_bdev_io_wait 00:32:05.190 ************************************ 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:05.190 * Looking for test storage... 
00:32:05.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.190 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.191 --rc genhtml_branch_coverage=1 00:32:05.191 --rc genhtml_function_coverage=1 00:32:05.191 --rc genhtml_legend=1 00:32:05.191 --rc geninfo_all_blocks=1 00:32:05.191 --rc geninfo_unexecuted_blocks=1 00:32:05.191 00:32:05.191 ' 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.191 --rc genhtml_branch_coverage=1 00:32:05.191 --rc genhtml_function_coverage=1 00:32:05.191 --rc genhtml_legend=1 00:32:05.191 --rc geninfo_all_blocks=1 00:32:05.191 --rc geninfo_unexecuted_blocks=1 00:32:05.191 00:32:05.191 ' 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.191 --rc genhtml_branch_coverage=1 00:32:05.191 --rc genhtml_function_coverage=1 00:32:05.191 --rc genhtml_legend=1 00:32:05.191 --rc geninfo_all_blocks=1 00:32:05.191 --rc geninfo_unexecuted_blocks=1 00:32:05.191 00:32:05.191 ' 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.191 --rc genhtml_branch_coverage=1 00:32:05.191 --rc genhtml_function_coverage=1 00:32:05.191 --rc genhtml_legend=1 00:32:05.191 --rc geninfo_all_blocks=1 00:32:05.191 --rc 
geninfo_unexecuted_blocks=1 00:32:05.191 00:32:05.191 ' 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.191 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.192 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.462 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:05.462 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:05.462 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.462 00:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:13.597 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:13.597 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:13.597 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:13.597 
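The device scan above comes from nvmf/common.sh: it collects the PCI IDs of supported NICs (E810 0x1592/0x159b, X722 0x37d2, plus the Mellanox list) and resolves each matching PCI address to its kernel net device through sysfs. A rough equivalent of that lookup, hard-coding the two E810 ports seen in this run:

for pci in 0000:4b:00.0 0000:4b:00.1; do                  # addresses from this run
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $netdir ]] || continue                      # no bound net interface -> skip
        echo "Found net devices under $pci: ${netdir##*/}"
    done
done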
00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:13.597 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:13.597 00:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:13.597 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:13.597 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.597 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:13.597 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:13.597 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:13.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:32:13.598 00:32:13.598 --- 10.0.0.2 ping statistics --- 00:32:13.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.598 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:13.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:13.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:32:13.598 00:32:13.598 --- 10.0.0.1 ping statistics --- 00:32:13.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.598 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3489677 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3489677 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3489677 ']' 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
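nvmf_tcp_init splits the two ports into a small target/initiator topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (target side), cvl_0_1 stays in the default namespace with 10.0.0.1/24 (initiator side), an iptables rule admits NVMe/TCP traffic on port 4420, and both directions are ping-verified before the target is started inside the namespace. Condensed from the trace above (interface names and addresses are specific to this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in on 4420
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator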
00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:13.598 00:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.598 [2024-10-09 00:39:43.387007] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:13.598 [2024-10-09 00:39:43.388172] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:32:13.598 [2024-10-09 00:39:43.388226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.598 [2024-10-09 00:39:43.479381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:13.598 [2024-10-09 00:39:43.575462] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.598 [2024-10-09 00:39:43.575525] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.598 [2024-10-09 00:39:43.575538] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.598 [2024-10-09 00:39:43.575546] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.598 [2024-10-09 00:39:43.575552] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.598 [2024-10-09 00:39:43.577932] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.598 [2024-10-09 00:39:43.578113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.598 [2024-10-09 00:39:43.578277] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.598 [2024-10-09 00:39:43.578276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.598 [2024-10-09 00:39:43.578766] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:32:13.598 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:13.598 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:32:13.598 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:13.598 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:13.598 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.860 [2024-10-09 00:39:44.323745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:13.860 [2024-10-09 00:39:44.324421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:13.860 [2024-10-09 00:39:44.324468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:13.860 [2024-10-09 00:39:44.324633] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
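Because nvmf_tgt was started with --wait-for-rpc, bdev options can still be changed before subsystem initialization: bdev_set_options -p 5 -c 1 is issued first, then framework_start_init completes init and the poll-group threads come up in interrupt mode; the transport, malloc bdev, subsystem, namespace and listener are created next (the following trace lines show each rpc_cmd). A hedged sketch of that ordering, assuming the rpc.py path used earlier in this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc bdev_set_options -p 5 -c 1          # tiny bdev_io pool/cache so bdevperf hits the io-wait path; only allowed before init
$rpc framework_start_init                # finish subsystem initialization
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420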
00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.860 [2024-10-09 00:39:44.335265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.860 Malloc0 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:13.860 [2024-10-09 00:39:44.423656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3489901 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3489903 00:32:13.860 00:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:13.860 { 00:32:13.860 "params": { 00:32:13.860 "name": "Nvme$subsystem", 00:32:13.860 "trtype": "$TEST_TRANSPORT", 00:32:13.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.860 "adrfam": "ipv4", 00:32:13.860 "trsvcid": "$NVMF_PORT", 00:32:13.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.860 "hdgst": ${hdgst:-false}, 00:32:13.860 "ddgst": ${ddgst:-false} 00:32:13.860 }, 00:32:13.860 "method": "bdev_nvme_attach_controller" 00:32:13.860 } 00:32:13.860 EOF 00:32:13.860 )") 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3489906 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3489910 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:13.860 { 00:32:13.860 "params": { 00:32:13.860 "name": "Nvme$subsystem", 00:32:13.860 "trtype": "$TEST_TRANSPORT", 00:32:13.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.860 "adrfam": "ipv4", 00:32:13.860 "trsvcid": "$NVMF_PORT", 00:32:13.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.860 "hdgst": ${hdgst:-false}, 00:32:13.860 "ddgst": ${ddgst:-false} 00:32:13.860 }, 00:32:13.860 "method": "bdev_nvme_attach_controller" 00:32:13.860 } 00:32:13.860 EOF 00:32:13.860 )") 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:13.860 { 00:32:13.860 "params": { 00:32:13.860 "name": "Nvme$subsystem", 00:32:13.860 "trtype": "$TEST_TRANSPORT", 00:32:13.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.860 "adrfam": "ipv4", 00:32:13.860 "trsvcid": "$NVMF_PORT", 00:32:13.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.860 "hdgst": ${hdgst:-false}, 00:32:13.860 "ddgst": ${ddgst:-false} 00:32:13.860 }, 00:32:13.860 "method": "bdev_nvme_attach_controller" 00:32:13.860 } 00:32:13.860 EOF 00:32:13.860 )") 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:13.860 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:13.860 { 00:32:13.860 "params": { 00:32:13.860 "name": "Nvme$subsystem", 00:32:13.860 "trtype": "$TEST_TRANSPORT", 00:32:13.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.860 "adrfam": "ipv4", 00:32:13.860 "trsvcid": "$NVMF_PORT", 00:32:13.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.860 "hdgst": ${hdgst:-false}, 00:32:13.861 "ddgst": ${ddgst:-false} 00:32:13.861 }, 00:32:13.861 "method": "bdev_nvme_attach_controller" 00:32:13.861 } 00:32:13.861 EOF 00:32:13.861 )") 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3489901 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
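[editor's note] The four bdevperf processes above each read a generated JSON config over /dev/fd/63, i.e. the script hands gen_nvmf_target_json (the heredoc-plus-jq helper traced here) to --json via process substitution, so each instance attaches to the target before running its workload. A rough sketch of how bdev_io_wait.sh drives this (flags and binary path copied from the trace; the backgrounding and $! capture are inferred from the WRITE_PID/READ_PID values above rather than shown verbatim):

  # One bdevperf per workload, each pinned to its own core (-m) with its own shm id (-i)
  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  $bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  $bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
  READ_PID=$!
  $bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  FLUSH_PID=$!
  $bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  UNMAP_PID=$!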
00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:13.861 "params": { 00:32:13.861 "name": "Nvme1", 00:32:13.861 "trtype": "tcp", 00:32:13.861 "traddr": "10.0.0.2", 00:32:13.861 "adrfam": "ipv4", 00:32:13.861 "trsvcid": "4420", 00:32:13.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.861 "hdgst": false, 00:32:13.861 "ddgst": false 00:32:13.861 }, 00:32:13.861 "method": "bdev_nvme_attach_controller" 00:32:13.861 }' 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:13.861 "params": { 00:32:13.861 "name": "Nvme1", 00:32:13.861 "trtype": "tcp", 00:32:13.861 "traddr": "10.0.0.2", 00:32:13.861 "adrfam": "ipv4", 00:32:13.861 "trsvcid": "4420", 00:32:13.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.861 "hdgst": false, 00:32:13.861 "ddgst": false 00:32:13.861 }, 00:32:13.861 "method": "bdev_nvme_attach_controller" 00:32:13.861 }' 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:13.861 "params": { 00:32:13.861 "name": "Nvme1", 00:32:13.861 "trtype": "tcp", 00:32:13.861 "traddr": "10.0.0.2", 00:32:13.861 "adrfam": "ipv4", 00:32:13.861 "trsvcid": "4420", 00:32:13.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.861 "hdgst": false, 00:32:13.861 "ddgst": false 00:32:13.861 }, 00:32:13.861 "method": "bdev_nvme_attach_controller" 00:32:13.861 }' 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:13.861 00:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:13.861 "params": { 00:32:13.861 "name": "Nvme1", 00:32:13.861 "trtype": "tcp", 00:32:13.861 "traddr": "10.0.0.2", 00:32:13.861 "adrfam": "ipv4", 00:32:13.861 "trsvcid": "4420", 00:32:13.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.861 "hdgst": false, 00:32:13.861 "ddgst": false 00:32:13.861 }, 00:32:13.861 "method": "bdev_nvme_attach_controller" 00:32:13.861 }' 00:32:13.861 [2024-10-09 00:39:44.479664] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:32:13.861 [2024-10-09 00:39:44.479743] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:13.861 [2024-10-09 00:39:44.484183] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:32:13.861 [2024-10-09 00:39:44.484249] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:13.861 [2024-10-09 00:39:44.485058] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:32:13.861 [2024-10-09 00:39:44.485122] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:13.861 [2024-10-09 00:39:44.487163] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:32:13.861 [2024-10-09 00:39:44.487228] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:14.120 [2024-10-09 00:39:44.685096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.380 [2024-10-09 00:39:44.756967] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:32:14.380 [2024-10-09 00:39:44.781477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.380 [2024-10-09 00:39:44.847113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.380 [2024-10-09 00:39:44.851937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:32:14.380 [2024-10-09 00:39:44.908377] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:32:14.380 [2024-10-09 00:39:44.913545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.380 [2024-10-09 00:39:44.979345] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:32:14.641 Running I/O for 1 seconds... 00:32:14.641 Running I/O for 1 seconds... 00:32:14.901 Running I/O for 1 seconds... 00:32:14.901 Running I/O for 1 seconds... 
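[editor's note] The four "Running I/O for 1 seconds..." lines confirm all workloads are in flight; the script then reaps them one by one (the wait 3489901 / 3489903 / 3489906 / 3489910 calls traced around the result tables below). A minimal sketch of that synchronization, reusing the PID variables from the sketch above:

  # Block until each bdevperf instance finishes and prints its latency table;
  # its exit status surfaces here, so a failed job fails the test.
  wait "$WRITE_PID"
  wait "$READ_PID"
  wait "$FLUSH_PID"
  wait "$UNMAP_PID"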
00:32:15.472 11355.00 IOPS, 44.36 MiB/s 00:32:15.472 Latency(us) 00:32:15.472 [2024-10-08T22:39:46.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.472 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:15.472 Nvme1n1 : 1.01 11404.03 44.55 0.00 0.00 11181.82 2307.41 14199.47 00:32:15.472 [2024-10-08T22:39:46.107Z] =================================================================================================================== 00:32:15.472 [2024-10-08T22:39:46.107Z] Total : 11404.03 44.55 0.00 0.00 11181.82 2307.41 14199.47 00:32:15.732 10337.00 IOPS, 40.38 MiB/s 00:32:15.732 Latency(us) 00:32:15.732 [2024-10-08T22:39:46.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.732 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:15.732 Nvme1n1 : 1.01 10409.69 40.66 0.00 0.00 12256.56 4341.76 16384.00 00:32:15.732 [2024-10-08T22:39:46.367Z] =================================================================================================================== 00:32:15.732 [2024-10-08T22:39:46.367Z] Total : 10409.69 40.66 0.00 0.00 12256.56 4341.76 16384.00 00:32:15.732 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3489903 00:32:15.732 10067.00 IOPS, 39.32 MiB/s 00:32:15.732 Latency(us) 00:32:15.732 [2024-10-08T22:39:46.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.733 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:15.733 Nvme1n1 : 1.01 10124.55 39.55 0.00 0.00 12594.97 5352.11 20206.93 00:32:15.733 [2024-10-08T22:39:46.368Z] =================================================================================================================== 00:32:15.733 [2024-10-08T22:39:46.368Z] Total : 10124.55 39.55 0.00 0.00 12594.97 5352.11 20206.93 00:32:15.993 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3489906 00:32:15.993 187720.00 IOPS, 733.28 MiB/s 00:32:15.993 Latency(us) 00:32:15.993 [2024-10-08T22:39:46.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.993 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:15.993 Nvme1n1 : 1.00 187334.73 731.78 0.00 0.00 679.43 334.51 2088.96 00:32:15.993 [2024-10-08T22:39:46.628Z] =================================================================================================================== 00:32:15.993 [2024-10-08T22:39:46.628Z] Total : 187334.73 731.78 0.00 0.00 679.43 334.51 2088.96 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3489910 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:16.254 rmmod nvme_tcp 00:32:16.254 rmmod nvme_fabrics 00:32:16.254 rmmod nvme_keyring 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3489677 ']' 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3489677 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3489677 ']' 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3489677 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3489677 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3489677' 00:32:16.254 killing process with pid 3489677 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3489677 00:32:16.254 00:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3489677 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.515 00:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:19.057 00:32:19.057 real 0m13.522s 00:32:19.057 user 0m17.788s 00:32:19.057 sys 0m7.998s 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:19.057 ************************************ 00:32:19.057 END TEST nvmf_bdev_io_wait 00:32:19.057 ************************************ 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:19.057 ************************************ 00:32:19.057 START TEST nvmf_queue_depth 00:32:19.057 ************************************ 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:19.057 * Looking for test storage... 
00:32:19.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:19.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.057 --rc genhtml_branch_coverage=1 00:32:19.057 --rc genhtml_function_coverage=1 00:32:19.057 --rc genhtml_legend=1 00:32:19.057 --rc geninfo_all_blocks=1 00:32:19.057 --rc geninfo_unexecuted_blocks=1 00:32:19.057 00:32:19.057 ' 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:19.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.057 --rc genhtml_branch_coverage=1 00:32:19.057 --rc genhtml_function_coverage=1 00:32:19.057 --rc genhtml_legend=1 00:32:19.057 --rc geninfo_all_blocks=1 00:32:19.057 --rc geninfo_unexecuted_blocks=1 00:32:19.057 00:32:19.057 ' 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:19.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.057 --rc genhtml_branch_coverage=1 00:32:19.057 --rc genhtml_function_coverage=1 00:32:19.057 --rc genhtml_legend=1 00:32:19.057 --rc geninfo_all_blocks=1 00:32:19.057 --rc geninfo_unexecuted_blocks=1 00:32:19.057 00:32:19.057 ' 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:19.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.057 --rc genhtml_branch_coverage=1 00:32:19.057 --rc genhtml_function_coverage=1 00:32:19.057 --rc genhtml_legend=1 00:32:19.057 --rc geninfo_all_blocks=1 00:32:19.057 --rc 
geninfo_unexecuted_blocks=1 00:32:19.057 00:32:19.057 ' 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.057 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:19.058 00:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:27.191 00:39:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:27.191 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:27.191 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.191 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:32:27.192 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:27.192 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:27.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:27.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:32:27.192 00:32:27.192 --- 10.0.0.2 ping statistics --- 00:32:27.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.192 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:27.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:27.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:32:27.192 00:32:27.192 --- 10.0.0.1 ping statistics --- 00:32:27.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.192 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3494422 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3494422 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3494422 ']' 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
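[editor's note] The nvmf_tcp_init trace above moves one port of the e810 pair (cvl_0_0) into a private namespace so the target (cvl_0_0_ns_spdk, 10.0.0.2) and the initiator side (cvl_0_1, 10.0.0.1) exchange traffic over the physical link, then sanity-checks both directions with ping. A sketch of that plumbing using only commands that appear in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps cvl_0_1 in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagged with an SPDK_NVMF comment so teardown can strip the rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator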
00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.192 00:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.192 [2024-10-09 00:39:57.029443] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:27.192 [2024-10-09 00:39:57.030591] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:32:27.192 [2024-10-09 00:39:57.030642] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.192 [2024-10-09 00:39:57.126265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.192 [2024-10-09 00:39:57.219157] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.192 [2024-10-09 00:39:57.219216] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.192 [2024-10-09 00:39:57.219225] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.192 [2024-10-09 00:39:57.219238] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.192 [2024-10-09 00:39:57.219245] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.192 [2024-10-09 00:39:57.220008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.192 [2024-10-09 00:39:57.296140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:27.192 [2024-10-09 00:39:57.296436] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
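[editor's note] The notices above come from the target being launched inside that namespace with --interrupt-mode, which is why the reactor on core 1 and the app/nvmf_tgt_poll_group threads report interrupt rather than poll mode. Roughly what nvmfappstart -m 0x2 does here (binary path and flags copied from the trace; nvmfpid/$! and the waitforlisten helper polling /var/tmp/spdk.sock are how the autotest scripts are understood to work, sketched rather than quoted):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Poll until the target answers on its default RPC socket before issuing any rpc_cmd
  waitforlisten "$nvmfpid"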
00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.452 [2024-10-09 00:39:57.912870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.452 Malloc0 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.452 [2024-10-09 00:39:57.993106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3494746 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3494746 /var/tmp/bdevperf.sock 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3494746 ']' 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:27.452 00:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.452 00:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:27.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:27.452 00:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.452 00:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.452 [2024-10-09 00:39:58.051897] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
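bdevperf is started in wait-for-RPC mode (-z) on its own socket with a queue depth of 1024 and a 4 KiB verify workload for 10 seconds; the controller attach and the run itself are then driven over that socket, as the following lines show. A condensed sketch of the same sequence, with paths relative to the SPDK tree:

  # Start bdevperf idle; it waits on /var/tmp/bdevperf.sock for configuration.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # Attach the remote namespace exported by the target over TCP.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Kick off the configured workload and wait for the results.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests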
00:32:27.452 [2024-10-09 00:39:58.051964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494746 ] 00:32:27.712 [2024-10-09 00:39:58.134178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.712 [2024-10-09 00:39:58.231064] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.281 00:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.281 00:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:28.281 00:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:28.281 00:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.281 00:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:28.542 NVMe0n1 00:32:28.542 00:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.542 00:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:28.803 Running I/O for 10 seconds... 00:32:30.684 8583.00 IOPS, 33.53 MiB/s [2024-10-08T22:40:02.258Z] 9037.00 IOPS, 35.30 MiB/s [2024-10-08T22:40:03.640Z] 9504.67 IOPS, 37.13 MiB/s [2024-10-08T22:40:04.578Z] 10502.50 IOPS, 41.03 MiB/s [2024-10-08T22:40:05.517Z] 11242.20 IOPS, 43.91 MiB/s [2024-10-08T22:40:06.469Z] 11682.50 IOPS, 45.63 MiB/s [2024-10-08T22:40:07.419Z] 12004.14 IOPS, 46.89 MiB/s [2024-10-08T22:40:08.361Z] 12298.00 IOPS, 48.04 MiB/s [2024-10-08T22:40:09.301Z] 12527.67 IOPS, 48.94 MiB/s [2024-10-08T22:40:09.301Z] 12705.10 IOPS, 49.63 MiB/s 00:32:38.666 Latency(us) 00:32:38.666 [2024-10-08T22:40:09.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.666 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:38.666 Verification LBA range: start 0x0 length 0x4000 00:32:38.666 NVMe0n1 : 10.06 12728.60 49.72 0.00 0.00 80176.73 25449.81 67720.53 00:32:38.666 [2024-10-08T22:40:09.301Z] =================================================================================================================== 00:32:38.666 [2024-10-08T22:40:09.301Z] Total : 12728.60 49.72 0.00 0.00 80176.73 25449.81 67720.53 00:32:38.927 { 00:32:38.927 "results": [ 00:32:38.927 { 00:32:38.927 "job": "NVMe0n1", 00:32:38.927 "core_mask": "0x1", 00:32:38.927 "workload": "verify", 00:32:38.927 "status": "finished", 00:32:38.927 "verify_range": { 00:32:38.927 "start": 0, 00:32:38.927 "length": 16384 00:32:38.927 }, 00:32:38.927 "queue_depth": 1024, 00:32:38.927 "io_size": 4096, 00:32:38.927 "runtime": 10.06136, 00:32:38.927 "iops": 12728.59732680274, 00:32:38.927 "mibps": 49.7210833078232, 00:32:38.927 "io_failed": 0, 00:32:38.927 "io_timeout": 0, 00:32:38.927 "avg_latency_us": 80176.73073250722, 00:32:38.927 "min_latency_us": 25449.81333333333, 00:32:38.927 "max_latency_us": 67720.53333333334 00:32:38.927 } 
00:32:38.927 ], 00:32:38.927 "core_count": 1 00:32:38.927 } 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3494746 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3494746 ']' 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3494746 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3494746 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3494746' 00:32:38.927 killing process with pid 3494746 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3494746 00:32:38.927 Received shutdown signal, test time was about 10.000000 seconds 00:32:38.927 00:32:38.927 Latency(us) 00:32:38.927 [2024-10-08T22:40:09.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.927 [2024-10-08T22:40:09.562Z] =================================================================================================================== 00:32:38.927 [2024-10-08T22:40:09.562Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3494746 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.927 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.927 rmmod nvme_tcp 00:32:38.927 rmmod nvme_fabrics 00:32:38.927 rmmod nvme_keyring 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
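The summary numbers above are internally consistent: 12728.60 IOPS of 4096-byte I/Os works out to the reported 49.72 MiB/s.

  # Quick sanity check of the throughput line in the results table
  echo 'scale=2; 12728.60 * 4096 / (1024 * 1024)' | bc
  # -> 49.72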
00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3494422 ']' 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3494422 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3494422 ']' 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3494422 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3494422 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3494422' 00:32:39.189 killing process with pid 3494422 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3494422 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3494422 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.189 00:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.733 00:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.733 00:32:41.733 real 0m22.666s 00:32:41.733 user 0m24.839s 00:32:41.733 sys 0m7.633s 00:32:41.733 00:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:41.733 00:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:41.733 ************************************ 00:32:41.733 END TEST nvmf_queue_depth 00:32:41.733 ************************************ 00:32:41.733 00:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:41.733 00:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:41.733 00:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:41.733 00:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:41.733 ************************************ 00:32:41.733 START TEST nvmf_target_multipath 00:32:41.733 ************************************ 00:32:41.733 00:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:41.733 * Looking for test storage... 00:32:41.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.733 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.734 --rc genhtml_branch_coverage=1 00:32:41.734 --rc genhtml_function_coverage=1 00:32:41.734 --rc genhtml_legend=1 00:32:41.734 --rc geninfo_all_blocks=1 00:32:41.734 --rc geninfo_unexecuted_blocks=1 00:32:41.734 00:32:41.734 ' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.734 --rc genhtml_branch_coverage=1 00:32:41.734 --rc genhtml_function_coverage=1 00:32:41.734 --rc genhtml_legend=1 00:32:41.734 --rc geninfo_all_blocks=1 00:32:41.734 --rc geninfo_unexecuted_blocks=1 00:32:41.734 00:32:41.734 ' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.734 --rc genhtml_branch_coverage=1 00:32:41.734 --rc genhtml_function_coverage=1 00:32:41.734 --rc genhtml_legend=1 
00:32:41.734 --rc geninfo_all_blocks=1 00:32:41.734 --rc geninfo_unexecuted_blocks=1 00:32:41.734 00:32:41.734 ' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:41.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.734 --rc genhtml_branch_coverage=1 00:32:41.734 --rc genhtml_function_coverage=1 00:32:41.734 --rc genhtml_legend=1 00:32:41.734 --rc geninfo_all_blocks=1 00:32:41.734 --rc geninfo_unexecuted_blocks=1 00:32:41.734 00:32:41.734 ' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.734 00:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:32:49.909 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.909 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:49.909 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:49.909 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:49.909 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.910 00:40:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:49.910 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:49.910 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:49.910 00:40:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:49.910 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:49.910 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.910 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:49.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:49.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:32:49.911 00:32:49.911 --- 10.0.0.2 ping statistics --- 00:32:49.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.911 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:49.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:32:49.911 00:32:49.911 --- 10.0.0.1 ping statistics --- 00:32:49.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.911 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:49.911 only one NIC for nvmf test 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.911 rmmod nvme_tcp 00:32:49.911 rmmod nvme_fabrics 00:32:49.911 rmmod nvme_keyring 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:49.911 00:40:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.911 00:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:51.295 00:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.295 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:51.296 00:32:51.296 real 0m9.911s 00:32:51.296 user 0m2.242s 00:32:51.296 sys 0m5.622s 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:51.296 ************************************ 00:32:51.296 END TEST nvmf_target_multipath 00:32:51.296 ************************************ 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:51.296 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:51.576 ************************************ 00:32:51.576 START TEST nvmf_zcopy 00:32:51.576 ************************************ 00:32:51.576 00:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:51.576 * Looking for test storage... 
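The multipath test above exits early at the single-NIC check, but its nvmftestinit phase documents the network layout that every tcp test in this run relies on. A condensed sketch, with interface names and addresses copied verbatim from the commands logged above:

  # Target side lives in a network namespace; the initiator side stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port towards the initiator interface, then verify reachability.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2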
00:32:51.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.576 --rc genhtml_branch_coverage=1 00:32:51.576 --rc genhtml_function_coverage=1 00:32:51.576 --rc genhtml_legend=1 00:32:51.576 --rc geninfo_all_blocks=1 00:32:51.576 --rc geninfo_unexecuted_blocks=1 00:32:51.576 00:32:51.576 ' 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.576 --rc genhtml_branch_coverage=1 00:32:51.576 --rc genhtml_function_coverage=1 00:32:51.576 --rc genhtml_legend=1 00:32:51.576 --rc geninfo_all_blocks=1 00:32:51.576 --rc geninfo_unexecuted_blocks=1 00:32:51.576 00:32:51.576 ' 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.576 --rc genhtml_branch_coverage=1 00:32:51.576 --rc genhtml_function_coverage=1 00:32:51.576 --rc genhtml_legend=1 00:32:51.576 --rc geninfo_all_blocks=1 00:32:51.576 --rc geninfo_unexecuted_blocks=1 00:32:51.576 00:32:51.576 ' 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.576 --rc genhtml_branch_coverage=1 00:32:51.576 --rc genhtml_function_coverage=1 00:32:51.576 --rc genhtml_legend=1 00:32:51.576 --rc geninfo_all_blocks=1 00:32:51.576 --rc geninfo_unexecuted_blocks=1 00:32:51.576 00:32:51.576 ' 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:51.576 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.577 00:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:51.577 00:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:59.745 00:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.745 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:59.746 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:59.746 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:59.746 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:59.746 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:59.746 00:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:59.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:32:59.746 00:32:59.746 --- 10.0.0.2 ping statistics --- 00:32:59.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.746 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:59.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:32:59.746 00:32:59.746 --- 10.0.0.1 ping statistics --- 00:32:59.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.746 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3505081 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3505081 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:59.746 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3505081 ']' 00:32:59.747 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.747 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:59.747 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.747 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:59.747 00:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:59.747 [2024-10-09 00:40:29.700165] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:59.747 [2024-10-09 00:40:29.701298] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:32:59.747 [2024-10-09 00:40:29.701349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.747 [2024-10-09 00:40:29.790515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.747 [2024-10-09 00:40:29.883642] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:59.747 [2024-10-09 00:40:29.883702] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:59.747 [2024-10-09 00:40:29.883711] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:59.747 [2024-10-09 00:40:29.883718] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:59.747 [2024-10-09 00:40:29.883735] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:59.747 [2024-10-09 00:40:29.884485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.747 [2024-10-09 00:40:29.961354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:59.747 [2024-10-09 00:40:29.961641] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
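At this point the trace above has completed nvmf_tcp_init and nvmfappstart: one E810 port (cvl_0_0) was moved into a private network namespace, 10.0.0.1/24 and 10.0.0.2/24 were assigned to the initiator and target sides, a firewall exception for TCP port 4420 was inserted, connectivity was verified with ping in both directions, and nvmf_tgt was launched inside the namespace on a single core in interrupt mode. Collected into one place, and with the caveat that the interface names and addresses are specific to this run's hardware, the equivalent standalone commands are roughly:

  # Condensed sketch of the nvmf_tcp_init / nvmfappstart steps traced above.
  # cvl_0_0 / cvl_0_1 and the 10.0.0.x addresses are what this run discovered
  # on the E810 ports; they will differ on other hosts.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                      # target side lives in its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address (host namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # host namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host namespace
  # Start the target inside the namespace on core 1 (-m 0x2), interrupt mode,
  # then wait for /var/tmp/spdk.sock before issuing RPCs (waitforlisten in the trace).
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &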
00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.007 [2024-10-09 00:40:30.565344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.007 [2024-10-09 00:40:30.593643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:00.007 00:40:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.007 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.269 malloc0 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:00.269 { 00:33:00.269 "params": { 00:33:00.269 "name": "Nvme$subsystem", 00:33:00.269 "trtype": "$TEST_TRANSPORT", 00:33:00.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.269 "adrfam": "ipv4", 00:33:00.269 "trsvcid": "$NVMF_PORT", 00:33:00.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.269 "hdgst": ${hdgst:-false}, 00:33:00.269 "ddgst": ${ddgst:-false} 00:33:00.269 }, 00:33:00.269 "method": "bdev_nvme_attach_controller" 00:33:00.269 } 00:33:00.269 EOF 00:33:00.269 )") 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:00.269 00:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:00.269 "params": { 00:33:00.269 "name": "Nvme1", 00:33:00.269 "trtype": "tcp", 00:33:00.269 "traddr": "10.0.0.2", 00:33:00.269 "adrfam": "ipv4", 00:33:00.269 "trsvcid": "4420", 00:33:00.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.269 "hdgst": false, 00:33:00.269 "ddgst": false 00:33:00.269 }, 00:33:00.269 "method": "bdev_nvme_attach_controller" 00:33:00.269 }' 00:33:00.269 [2024-10-09 00:40:30.721482] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:33:00.269 [2024-10-09 00:40:30.721561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505430 ] 00:33:00.269 [2024-10-09 00:40:30.803200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.269 [2024-10-09 00:40:30.899124] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.841 Running I/O for 10 seconds... 00:33:02.747 6326.00 IOPS, 49.42 MiB/s [2024-10-08T22:40:34.326Z] 6381.50 IOPS, 49.86 MiB/s [2024-10-08T22:40:35.710Z] 6406.67 IOPS, 50.05 MiB/s [2024-10-08T22:40:36.650Z] 6886.75 IOPS, 53.80 MiB/s [2024-10-08T22:40:37.710Z] 7430.20 IOPS, 58.05 MiB/s [2024-10-08T22:40:38.307Z] 7789.83 IOPS, 60.86 MiB/s [2024-10-08T22:40:39.691Z] 8042.43 IOPS, 62.83 MiB/s [2024-10-08T22:40:40.646Z] 8232.88 IOPS, 64.32 MiB/s [2024-10-08T22:40:41.587Z] 8369.11 IOPS, 65.38 MiB/s [2024-10-08T22:40:41.587Z] 8484.90 IOPS, 66.29 MiB/s 00:33:10.952 Latency(us) 00:33:10.952 [2024-10-08T22:40:41.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.952 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:10.952 Verification LBA range: start 0x0 length 0x1000 00:33:10.952 Nvme1n1 : 10.01 8489.44 66.32 0.00 0.00 15030.91 1938.77 28617.39 00:33:10.952 [2024-10-08T22:40:41.587Z] =================================================================================================================== 00:33:10.952 [2024-10-08T22:40:41.587Z] Total : 8489.44 66.32 0.00 0.00 15030.91 1938.77 28617.39 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3507438 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:10.952 { 00:33:10.952 "params": { 00:33:10.952 "name": "Nvme$subsystem", 00:33:10.952 "trtype": "$TEST_TRANSPORT", 00:33:10.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:10.952 "adrfam": "ipv4", 00:33:10.952 "trsvcid": "$NVMF_PORT", 00:33:10.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.952 "hdgst": ${hdgst:-false}, 00:33:10.952 "ddgst": ${ddgst:-false} 00:33:10.952 }, 00:33:10.952 "method": "bdev_nvme_attach_controller" 00:33:10.952 } 00:33:10.952 EOF 00:33:10.952 )") 00:33:10.952 [2024-10-09 00:40:41.424914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:33:10.952 [2024-10-09 00:40:41.424942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:10.952 00:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:10.952 "params": { 00:33:10.952 "name": "Nvme1", 00:33:10.952 "trtype": "tcp", 00:33:10.952 "traddr": "10.0.0.2", 00:33:10.952 "adrfam": "ipv4", 00:33:10.952 "trsvcid": "4420", 00:33:10.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:10.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:10.952 "hdgst": false, 00:33:10.952 "ddgst": false 00:33:10.952 }, 00:33:10.952 "method": "bdev_nvme_attach_controller" 00:33:10.952 }' 00:33:10.952 [2024-10-09 00:40:41.436883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.436894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.448879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.448887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.460880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.460889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.469937] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:33:10.952 [2024-10-09 00:40:41.469986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507438 ] 00:33:10.952 [2024-10-09 00:40:41.472879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.472888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.484879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.484888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.496879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.496887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.508878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.508886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.520879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.520888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.532879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.532887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.544817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.952 [2024-10-09 00:40:41.544879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.544886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.556880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.556889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.568892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.568902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.952 [2024-10-09 00:40:41.580879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.952 [2024-10-09 00:40:41.580893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.213 [2024-10-09 00:40:41.592880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.213 [2024-10-09 00:40:41.592889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.213 [2024-10-09 00:40:41.598435] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.213 [2024-10-09 00:40:41.604879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.213 [2024-10-09 00:40:41.604888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.213 [2024-10-09 00:40:41.616883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:11.213 [2024-10-09 00:40:41.616897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.213 [2024-10-09 00:40:41.628883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.213 [2024-10-09 00:40:41.628892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.213 [2024-10-09 00:40:41.640879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.213 [2024-10-09 00:40:41.640889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.213 [2024-10-09 00:40:41.652880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.213 [2024-10-09 00:40:41.652889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.213 [2024-10-09 00:40:41.664890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.213 [2024-10-09 00:40:41.664905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.213 [2024-10-09 00:40:41.676882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.213 [2024-10-09 00:40:41.676896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.213 [2024-10-09 00:40:41.688881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.688892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.700880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.700890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.712880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.712891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.724885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.724900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 Running I/O for 5 seconds... 
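The preceding trace configured the target over its RPC socket and drove I/O from bdevperf: a zero-copy TCP transport was created, subsystem nqn.2016-06.io.spdk:cnode1 was given a 32 MiB malloc namespace and a listener on 10.0.0.2:4420, and bdevperf ran the 10 s verify workload summarized in the latency table above. The second bdevperf run (the 5 s randrw run now starting) is accompanied by repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs, which appear to come from the script re-issuing nvmf_subsystem_add_ns for NSID 1 while the namespace is still attached, exercising the paused-subsystem RPC path. A rough standalone equivalent of the setup is sketched below; it assumes SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket (the test's rpc_cmd wrapper forwards the same arguments), and the outer "subsystems"/"bdev" wrapper of the hypothetical /tmp/bdevperf_nvme.json file is reconstructed from gen_nvmf_target_json rather than shown verbatim in the trace:

  # Target-side configuration, flags exactly as traced above (zcopy.sh@22-30).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport with zero copy
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB RAM bdev, 4 KiB blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Initiator side: attach to the exported namespace over TCP and run the verify
  # workload. /tmp/bdevperf_nvme.json is a hypothetical file name; the "params"
  # object is taken verbatim from the gen_nvmf_target_json output in the trace,
  # the surrounding wrapper is an assumption about that helper's full output.
  cat > /tmp/bdevperf_nvme.json <<'JSON'
  {
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
          "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
        }
      } ]
    } ]
  }
  JSON
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192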
00:33:11.214 [2024-10-09 00:40:41.740920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.740937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.753559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.753575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.768201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.768217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.781679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.781695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.796573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.796589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.809603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.809618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.824517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.824533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.214 [2024-10-09 00:40:41.837443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.214 [2024-10-09 00:40:41.837458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.852333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.852349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.865388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.865403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.880239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.880255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.893197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.893212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.908247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.908262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.921681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.921696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.935498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 
[2024-10-09 00:40:41.935513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.948077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.948092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.961023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.961039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.972429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.972444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:41.985355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:41.985370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:42.000090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:42.000105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:42.013329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:42.013344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:42.028000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:42.028016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:42.040902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:42.040917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:42.053193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:42.053208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:42.068144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:42.068160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-10-09 00:40:42.081432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-10-09 00:40:42.081447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.475 [2024-10-09 00:40:42.095757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.475 [2024-10-09 00:40:42.095772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.734 [2024-10-09 00:40:42.109248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.734 [2024-10-09 00:40:42.109263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.734 [2024-10-09 00:40:42.124211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.734 [2024-10-09 00:40:42.124226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.137191] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.137206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.152066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.152081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.165262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.165281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.180696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.180711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.193414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.193428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.207856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.207871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.220805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.220821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.232404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.232420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.245404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.245419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.260191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.260207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.273274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.273288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.288543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.288558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.301574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.301589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.316156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.316171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.329260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.329274] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.344096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.344111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.735 [2024-10-09 00:40:42.356888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.735 [2024-10-09 00:40:42.356903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.995 [2024-10-09 00:40:42.369943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.995 [2024-10-09 00:40:42.369958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.995 [2024-10-09 00:40:42.384167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.995 [2024-10-09 00:40:42.384182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.995 [2024-10-09 00:40:42.397350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.995 [2024-10-09 00:40:42.397366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.995 [2024-10-09 00:40:42.411945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.995 [2024-10-09 00:40:42.411960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.995 [2024-10-09 00:40:42.425297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.995 [2024-10-09 00:40:42.425316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.995 [2024-10-09 00:40:42.440236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.995 [2024-10-09 00:40:42.440252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.995 [2024-10-09 00:40:42.453242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.995 [2024-10-09 00:40:42.453257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.468007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.468023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.481133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.481149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.492927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.492942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.505522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.505537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.519858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.519874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.532776] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.532792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.545504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.545519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.560361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.560377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.573196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.573211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.588385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.588400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.601489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.601504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.616417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.616433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.996 [2024-10-09 00:40:42.629285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.996 [2024-10-09 00:40:42.629300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.255 [2024-10-09 00:40:42.644385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.255 [2024-10-09 00:40:42.644401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.255 [2024-10-09 00:40:42.657217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.255 [2024-10-09 00:40:42.657232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.255 [2024-10-09 00:40:42.672546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.255 [2024-10-09 00:40:42.672561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.255 [2024-10-09 00:40:42.685036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.255 [2024-10-09 00:40:42.685057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.255 [2024-10-09 00:40:42.700236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.255 [2024-10-09 00:40:42.700252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.713510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.713525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.728514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.728530] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 18786.00 IOPS, 146.77 MiB/s [2024-10-08T22:40:42.891Z] [2024-10-09 00:40:42.741332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.741348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.756247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.756262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.769258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.769273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.783901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.783917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.796868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.796884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.809490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.809505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.824152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.824167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.837298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.837313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.852148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.852163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.865185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.865200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.256 [2024-10-09 00:40:42.879757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.256 [2024-10-09 00:40:42.879772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:42.892976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:42.892992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:42.905026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:42.905041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:42.920282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:42.920298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 
00:40:42.933055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:42.933070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:42.944799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:42.944815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:42.957291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:42.957306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:42.971472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:42.971487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:42.984422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:42.984438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:42.997445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:42.997459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.011682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.011697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.024759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.024774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.037192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.037207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.051897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.051912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.064527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.064542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.077004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.077019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.092496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.092512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.105505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.105520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.120454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.120470] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.133229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.133244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.516 [2024-10-09 00:40:43.147854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.516 [2024-10-09 00:40:43.147870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.160894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.160910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.172536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.172551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.185152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.185167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.200439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.200454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.213385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.213400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.228242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.228259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.240754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.240769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.252946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.252961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.265590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.265605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.280018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.280034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.293369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.293384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.307885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.307900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.320872] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.320887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.332606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.332621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.345526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.345541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.776 [2024-10-09 00:40:43.359539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.776 [2024-10-09 00:40:43.359554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.777 [2024-10-09 00:40:43.372288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.777 [2024-10-09 00:40:43.372303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.777 [2024-10-09 00:40:43.384413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.777 [2024-10-09 00:40:43.384428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.777 [2024-10-09 00:40:43.397010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.777 [2024-10-09 00:40:43.397025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.777 [2024-10-09 00:40:43.409440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.777 [2024-10-09 00:40:43.409455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.424773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.424788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.437696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.437711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.452569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.452585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.465388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.465404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.479601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.479617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.492440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.492455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.505245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.505259] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.520437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.520453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.533136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.533150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.547932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.547947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.560929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.560944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.573341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.573355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.587829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.587844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.600575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.600590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.613115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.613129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.627831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.627846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.640725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.640740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.653260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.653274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.037 [2024-10-09 00:40:43.668076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.037 [2024-10-09 00:40:43.668091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.681362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.681377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.695907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.695922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.709193] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.709207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.723875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.723890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.737056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.737072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 18837.00 IOPS, 147.16 MiB/s [2024-10-08T22:40:43.933Z] [2024-10-09 00:40:43.749394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.749409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.764301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.764317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.776979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.776994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.789426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.789440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.804340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.804355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.816896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.816911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.829463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.829477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.843909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.843924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.856818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.856834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.869251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.869265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.884285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.884300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.896916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:13.298 [2024-10-09 00:40:43.896931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.909616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.909631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.298 [2024-10-09 00:40:43.923950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.298 [2024-10-09 00:40:43.923965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:43.937001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:43.937017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:43.948665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:43.948683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:43.961680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:43.961695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:43.976021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:43.976037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:43.988893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:43.988908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.001610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.001625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.016199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.016214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.029172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.029187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.043785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.043801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.056796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.056811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.069707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.069727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.084144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.084159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.096796] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.096811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.109278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.109293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.124056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.124072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.137329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.137343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.152289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.152304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.164992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.165007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.177399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.177413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.559 [2024-10-09 00:40:44.192029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.559 [2024-10-09 00:40:44.192045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.204822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.204842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.217688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.217703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.231579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.231595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.244278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.244294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.256740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.256756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.269449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.269465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.283753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.283768] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.296636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.296651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.309256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.309270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.323758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.323774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.336436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.336452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.348953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.348968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.360973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.360989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.373692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.373707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.388466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.388481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.401388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.401404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.416317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.416333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.429254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.429269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.819 [2024-10-09 00:40:44.444230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.819 [2024-10-09 00:40:44.444246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.457288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.457307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.471918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.471934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.484872] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.484889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.497476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.497491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.512818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.512834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.525121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.525137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.537120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.537136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.549797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.549812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.563914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.563929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.576842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.576858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.589329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.589344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.604193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.604208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.617215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.617230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.632448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.632464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.644895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.644910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.657177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.657191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.672227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.672243] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.685029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.685044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.699762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.699778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.079 [2024-10-09 00:40:44.712923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.079 [2024-10-09 00:40:44.712939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 [2024-10-09 00:40:44.725451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.340 [2024-10-09 00:40:44.725466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 [2024-10-09 00:40:44.740165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.340 [2024-10-09 00:40:44.740181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 18861.67 IOPS, 147.36 MiB/s [2024-10-08T22:40:44.975Z] [2024-10-09 00:40:44.752748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.340 [2024-10-09 00:40:44.752764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 [2024-10-09 00:40:44.765321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.340 [2024-10-09 00:40:44.765336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 [2024-10-09 00:40:44.780076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.340 [2024-10-09 00:40:44.780092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 [2024-10-09 00:40:44.792656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.340 [2024-10-09 00:40:44.792672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 [2024-10-09 00:40:44.805186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.340 [2024-10-09 00:40:44.805201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 [2024-10-09 00:40:44.820021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.340 [2024-10-09 00:40:44.820036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 [2024-10-09 00:40:44.833127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.340 [2024-10-09 00:40:44.833142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.340 [2024-10-09 00:40:44.845557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.845572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.341 [2024-10-09 00:40:44.860415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.860431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.341 [2024-10-09 
00:40:44.873466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.873481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.341 [2024-10-09 00:40:44.888217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.888233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.341 [2024-10-09 00:40:44.901105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.901120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.341 [2024-10-09 00:40:44.913662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.913677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.341 [2024-10-09 00:40:44.928271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.928286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.341 [2024-10-09 00:40:44.941426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.941441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.341 [2024-10-09 00:40:44.956207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.956223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.341 [2024-10-09 00:40:44.969102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.341 [2024-10-09 00:40:44.969118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:44.981694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:44.981710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:44.995863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:44.995878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.008978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.008993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.021049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.021064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.035610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.035625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.048795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.048811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.061704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.061718] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.076358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.076373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.089070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.089085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.104533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.104549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.117851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.117867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.132733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.132749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.145423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.145437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.160221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.160236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.172852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.172867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.184919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.184934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.197757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.197772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.212294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.601 [2024-10-09 00:40:45.212312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.601 [2024-10-09 00:40:45.225344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.602 [2024-10-09 00:40:45.225359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.239863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.239879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.252788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.252803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.265330] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.265345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.280410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.280426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.293449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.293464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.308247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.308262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.321493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.321516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.336452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.336470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.349301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.349317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.364748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.364764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.377547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.377562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.392498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.392514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.405660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.405675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.419693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.419709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.432423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.432439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.445040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.445054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.460154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.460169] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.472912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.472931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.862 [2024-10-09 00:40:45.485391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.862 [2024-10-09 00:40:45.485406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.500434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.500450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.513211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.513225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.528044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.528059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.540691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.540706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.553330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.553345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.568178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.568193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.581027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.581042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.596464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.596479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.609039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.609054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.623925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.623940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.637104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.637119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.649189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.649203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.663794] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.663810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.677015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.677030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.689072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.689087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.704101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.704117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.717010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.717025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.729433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.729451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 18871.50 IOPS, 147.43 MiB/s [2024-10-08T22:40:45.758Z] [2024-10-09 00:40:45.743577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.743593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.123 [2024-10-09 00:40:45.756299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.123 [2024-10-09 00:40:45.756315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.768879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.768895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.780984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.780999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.793877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.793892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.808141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.808157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.821135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.821150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.832945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.832960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.844698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
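Note: the long run of paired errors above is the zcopy test deliberately re-adding a namespace under an NSID the subsystem already exposes while I/O is in flight; subsystem.c rejects each attempt and nvmf_rpc.c logs the failed RPC. A minimal sketch of the RPC that produces this response, assuming rpc.py is run from the spdk tree and using Malloc0 as a stand-in bdev name (the test script drives the same RPC through its rpc_cmd helper):
  # Attach a bdev to the subsystem as namespace 1. Repeating this while NSID 1
  # is still attached fails with "Requested NSID 1 already in use".
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1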
00:33:15.384 [2024-10-09 00:40:45.844713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.857513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.857527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.872069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.872085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.884806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.884821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.897685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.897700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.911798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.911813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.924675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.924691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.937408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.937422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.952133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.952149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.965250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.965266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.980388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.980404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:45.993304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:45.993320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.384 [2024-10-09 00:40:46.008516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.384 [2024-10-09 00:40:46.008532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.644 [2024-10-09 00:40:46.021517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.021532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.036650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.036666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.049702] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.049718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.064610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.064626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.077515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.077530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.091839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.091854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.104904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.104919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.116907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.116922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.130255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.130271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.144459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.144475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.157627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.157642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.172309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.172325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.185040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.185055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.196979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.196995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.209913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.209929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.224009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.224026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.237369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.237385] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.252707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.252728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.645 [2024-10-09 00:40:46.265053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.645 [2024-10-09 00:40:46.265068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.279984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.280000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.293306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.293321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.308283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.308298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.321207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.321223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.336139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.336155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.348841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.348857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.361029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.361044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.376069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.376085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.389139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.389154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.403703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.403718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.416680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.416696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.429012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.429027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.443909] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.443925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.457552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.457568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.472452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.472467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.484984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.485000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.496713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.496732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.509476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.509491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.524424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.524440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.905 [2024-10-09 00:40:46.537345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.905 [2024-10-09 00:40:46.537360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.552337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.552353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.565357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.565373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.580019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.580034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.593179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.593194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.607660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.607676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.620684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.620699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.633026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.633040] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.647844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.647860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.660726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.660741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.673419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.673435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.688221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.688236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.700876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.700892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.713448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.713464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.728398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.728413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 [2024-10-09 00:40:46.741208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.741222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 18846.60 IOPS, 147.24 MiB/s [2024-10-08T22:40:46.801Z] [2024-10-09 00:40:46.789203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.166 [2024-10-09 00:40:46.789216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.166 00:33:16.166 Latency(us) 00:33:16.166 [2024-10-08T22:40:46.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.166 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:16.166 Nvme1n1 : 5.05 18703.63 146.12 0.00 0.00 6783.27 2798.93 48059.73 00:33:16.166 [2024-10-08T22:40:46.801Z] =================================================================================================================== 00:33:16.166 [2024-10-08T22:40:46.801Z] Total : 18703.63 146.12 0.00 0.00 6783.27 2798.93 48059.73 00:33:16.426 [2024-10-09 00:40:46.800884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.426 [2024-10-09 00:40:46.800898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.426 [2024-10-09 00:40:46.812887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.427 [2024-10-09 00:40:46.812900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.427 [2024-10-09 00:40:46.824886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.427 [2024-10-09 
00:40:46.824900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.427 [2024-10-09 00:40:46.836884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.427 [2024-10-09 00:40:46.836894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.427 [2024-10-09 00:40:46.848882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.427 [2024-10-09 00:40:46.848892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.427 [2024-10-09 00:40:46.860879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.427 [2024-10-09 00:40:46.860887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.427 [2024-10-09 00:40:46.872883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.427 [2024-10-09 00:40:46.872894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.427 [2024-10-09 00:40:46.884881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.427 [2024-10-09 00:40:46.884891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.427 [2024-10-09 00:40:46.896881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.427 [2024-10-09 00:40:46.896891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.427 [2024-10-09 00:40:46.908879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.427 [2024-10-09 00:40:46.908887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3507438) - No such process 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3507438 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:16.427 delay0 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- 
# set +x 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.427 00:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:16.687 [2024-10-09 00:40:47.062894] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:24.840 Initializing NVMe Controllers 00:33:24.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:24.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:24.840 Initialization complete. Launching workers. 00:33:24.840 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 295, failed: 14383 00:33:24.840 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14602, failed to submit 76 00:33:24.840 success 14454, unsuccessful 148, failed 0 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:24.840 rmmod nvme_tcp 00:33:24.840 rmmod nvme_fabrics 00:33:24.840 rmmod nvme_keyring 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3505081 ']' 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3505081 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3505081 ']' 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3505081 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3505081 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3505081' 00:33:24.840 killing process with pid 3505081 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3505081 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3505081 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.840 00:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:26.230 00:33:26.230 real 0m34.657s 00:33:26.230 user 0m44.180s 00:33:26.230 sys 0m12.755s 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.230 ************************************ 00:33:26.230 END TEST nvmf_zcopy 00:33:26.230 ************************************ 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:26.230 ************************************ 00:33:26.230 START TEST nvmf_nmic 00:33:26.230 ************************************ 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:26.230 * Looking for test storage... 00:33:26.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:26.230 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:26.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.491 --rc genhtml_branch_coverage=1 00:33:26.491 --rc genhtml_function_coverage=1 00:33:26.491 --rc genhtml_legend=1 00:33:26.491 --rc geninfo_all_blocks=1 00:33:26.491 --rc geninfo_unexecuted_blocks=1 00:33:26.491 00:33:26.491 ' 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:26.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.491 --rc genhtml_branch_coverage=1 00:33:26.491 --rc genhtml_function_coverage=1 00:33:26.491 --rc genhtml_legend=1 00:33:26.491 --rc geninfo_all_blocks=1 00:33:26.491 --rc geninfo_unexecuted_blocks=1 00:33:26.491 00:33:26.491 ' 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:26.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.491 --rc genhtml_branch_coverage=1 00:33:26.491 --rc genhtml_function_coverage=1 00:33:26.491 --rc genhtml_legend=1 00:33:26.491 --rc geninfo_all_blocks=1 00:33:26.491 --rc geninfo_unexecuted_blocks=1 00:33:26.491 00:33:26.491 ' 00:33:26.491 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:26.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.491 --rc genhtml_branch_coverage=1 00:33:26.491 --rc genhtml_function_coverage=1 00:33:26.491 --rc genhtml_legend=1 00:33:26.491 --rc geninfo_all_blocks=1 00:33:26.492 --rc geninfo_unexecuted_blocks=1 00:33:26.492 00:33:26.492 ' 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.492 00:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:26.492 00:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:34.648 00:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:34.648 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:34.649 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:34.649 00:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:34.649 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:34.649 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.649 
00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:34.649 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
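Note: this is where common.sh splits the two e810 ports for the phy test: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given the target address 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24; the link-up and ping checks that follow confirm the path. A condensed sketch of that sequence, assuming the same interface and namespace names as above:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator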
00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:34.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:34.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:33:34.649 00:33:34.649 --- 10.0.0.2 ping statistics --- 00:33:34.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.649 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:34.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:34.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:33:34.649 00:33:34.649 --- 10.0.0.1 ping statistics --- 00:33:34.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.649 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3513904 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 3513904 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3513904 ']' 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:34.649 00:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.649 [2024-10-09 00:41:04.433532] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:34.649 [2024-10-09 00:41:04.434646] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:33:34.649 [2024-10-09 00:41:04.434696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.649 [2024-10-09 00:41:04.522798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:34.649 [2024-10-09 00:41:04.619450] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.649 [2024-10-09 00:41:04.619507] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:34.649 [2024-10-09 00:41:04.619517] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.649 [2024-10-09 00:41:04.619524] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.650 [2024-10-09 00:41:04.619531] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:34.650 [2024-10-09 00:41:04.621902] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.650 [2024-10-09 00:41:04.622067] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:34.650 [2024-10-09 00:41:04.622229] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.650 [2024-10-09 00:41:04.622229] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:34.650 [2024-10-09 00:41:04.720319] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:34.650 [2024-10-09 00:41:04.720796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:34.650 [2024-10-09 00:41:04.721489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:34.650 [2024-10-09 00:41:04.721783] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:34.650 [2024-10-09 00:41:04.721845] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:34.650 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:34.650 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:33:34.650 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:34.650 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:34.650 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 [2024-10-09 00:41:05.303135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 Malloc0 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 [2024-10-09 00:41:05.387327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:34.911 test case1: single bdev can't be used in multiple subsystems 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 [2024-10-09 00:41:05.422715] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:34.911 [2024-10-09 00:41:05.422752] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:34.911 [2024-10-09 00:41:05.422761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.911 request: 00:33:34.911 { 00:33:34.911 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:34.911 "namespace": { 00:33:34.911 "bdev_name": "Malloc0", 00:33:34.911 "no_auto_visible": false 00:33:34.911 }, 00:33:34.911 "method": "nvmf_subsystem_add_ns", 00:33:34.911 "req_id": 1 00:33:34.911 } 00:33:34.911 Got JSON-RPC error response 00:33:34.911 response: 00:33:34.911 { 00:33:34.911 "code": -32602, 00:33:34.911 "message": "Invalid parameters" 00:33:34.911 } 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:34.911 00:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:34.911 Adding namespace failed - expected result. 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:34.911 test case2: host connect to nvmf target in multiple paths 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:34.911 [2024-10-09 00:41:05.434883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.911 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:35.173 00:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:35.761 00:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:35.761 00:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:33:35.761 00:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:35.761 00:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:35.761 00:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:33:37.672 00:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:37.672 00:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:37.672 00:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:37.672 00:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:37.672 00:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:37.672 00:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:33:37.672 00:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:37.672 [global] 00:33:37.672 thread=1 00:33:37.672 invalidate=1 
00:33:37.672 rw=write 00:33:37.672 time_based=1 00:33:37.672 runtime=1 00:33:37.672 ioengine=libaio 00:33:37.672 direct=1 00:33:37.672 bs=4096 00:33:37.672 iodepth=1 00:33:37.672 norandommap=0 00:33:37.672 numjobs=1 00:33:37.672 00:33:37.672 verify_dump=1 00:33:37.672 verify_backlog=512 00:33:37.672 verify_state_save=0 00:33:37.672 do_verify=1 00:33:37.672 verify=crc32c-intel 00:33:37.672 [job0] 00:33:37.672 filename=/dev/nvme0n1 00:33:37.672 Could not set queue depth (nvme0n1) 00:33:38.240 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:38.240 fio-3.35 00:33:38.240 Starting 1 thread 00:33:39.180 00:33:39.180 job0: (groupid=0, jobs=1): err= 0: pid=3514979: Wed Oct 9 00:41:09 2024 00:33:39.180 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1026msec) 00:33:39.180 slat (nsec): min=27790, max=33277, avg=28436.44, stdev=1261.44 00:33:39.180 clat (usec): min=1001, max=44025, avg=39707.06, stdev=9676.93 00:33:39.180 lat (usec): min=1029, max=44058, avg=39735.50, stdev=9676.92 00:33:39.180 clat percentiles (usec): 00:33:39.180 | 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[41157], 20.00th=[41681], 00:33:39.180 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:39.180 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43779], 00:33:39.180 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:33:39.180 | 99.99th=[43779] 00:33:39.180 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:33:39.180 slat (nsec): min=9527, max=70525, avg=31878.69, stdev=10617.19 00:33:39.180 clat (usec): min=218, max=824, avg=566.45, stdev=91.27 00:33:39.180 lat (usec): min=240, max=860, avg=598.33, stdev=95.31 00:33:39.180 clat percentiles (usec): 00:33:39.180 | 1.00th=[ 322], 5.00th=[ 404], 10.00th=[ 441], 20.00th=[ 494], 00:33:39.180 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 586], 00:33:39.180 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 676], 95.00th=[ 701], 00:33:39.180 | 99.00th=[ 742], 99.50th=[ 783], 99.90th=[ 824], 99.95th=[ 824], 00:33:39.180 | 99.99th=[ 824] 00:33:39.180 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:39.180 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:39.180 lat (usec) : 250=0.38%, 500=20.38%, 750=74.91%, 1000=0.94% 00:33:39.180 lat (msec) : 2=0.19%, 50=3.21% 00:33:39.180 cpu : usr=0.68%, sys=2.34%, ctx=532, majf=0, minf=1 00:33:39.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.180 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:39.180 00:33:39.180 Run status group 0 (all jobs): 00:33:39.180 READ: bw=70.2KiB/s (71.9kB/s), 70.2KiB/s-70.2KiB/s (71.9kB/s-71.9kB/s), io=72.0KiB (73.7kB), run=1026-1026msec 00:33:39.180 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:33:39.180 00:33:39.180 Disk stats (read/write): 00:33:39.180 nvme0n1: ios=70/512, merge=0/0, ticks=1368/229, in_queue=1597, util=96.59% 00:33:39.180 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:39.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
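Annotation: the fio-wrapper call traced above expands into the [global]/[job0] job file printed in the log and runs it against the /dev/nvme0n1 device exposed by the preceding nvme connect. A rough standalone equivalent of that job, assuming stock fio option names (the wrapper's exact fio command line is not shown in the log, so this is a sketch, not the verbatim invocation):
  # sketch: reproduce the logged job file as a one-shot fio command
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --ioengine=libaio --direct=1 --thread --invalidate=1 \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0
The 1-second time_based write with crc32c-intel verify is what produces the small issued=18/512 rwts totals in the summary that follows.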
00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:39.441 rmmod nvme_tcp 00:33:39.441 rmmod nvme_fabrics 00:33:39.441 rmmod nvme_keyring 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:39.441 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:39.442 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:39.442 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3513904 ']' 00:33:39.442 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3513904 00:33:39.442 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3513904 ']' 00:33:39.442 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3513904 00:33:39.442 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:33:39.442 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:39.442 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3513904 00:33:39.442 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:39.442 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:39.442 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3513904' 00:33:39.442 killing process with pid 3513904 00:33:39.442 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3513904 00:33:39.442 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3513904 00:33:39.702 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:39.702 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.703 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.615 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:41.615 00:33:41.615 real 0m15.567s 00:33:41.615 user 0m36.085s 00:33:41.615 sys 0m7.266s 00:33:41.615 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:41.615 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:41.615 ************************************ 00:33:41.615 END TEST nvmf_nmic 00:33:41.615 ************************************ 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:41.882 ************************************ 00:33:41.882 START TEST nvmf_fio_target 00:33:41.882 ************************************ 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:41.882 * Looking for test storage... 
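Annotation: before the trace moves into nvmf_fio_target, the nvmf_nmic case that just finished boils down to the RPC sequence below (commands copied from the trace above; rpc_cmd is the test helper around scripts/rpc.py, shown here as direct rpc.py calls, and --hostnqn/--hostid are the values logged earlier):
  # target-side configuration for nvmf_nmic
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # case1: the same bdev cannot back a namespace in a second subsystem
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected failure: Malloc0 already claimed
  # case2: one subsystem reachable over two listeners, connected twice from the host
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=<as logged> --hostid=<as logged>
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn=<as logged> --hostid=<as logged>
The "Invalid parameters" JSON-RPC error in the trace is therefore the expected result for case1, and the "disconnected 2 controller(s)" message confirms both multipath connections from case2 were torn down.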
00:33:41.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:41.882 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.883 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:42.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.150 --rc genhtml_branch_coverage=1 00:33:42.150 --rc genhtml_function_coverage=1 00:33:42.150 --rc genhtml_legend=1 00:33:42.150 --rc geninfo_all_blocks=1 00:33:42.150 --rc geninfo_unexecuted_blocks=1 00:33:42.150 00:33:42.150 ' 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:42.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.150 --rc genhtml_branch_coverage=1 00:33:42.150 --rc genhtml_function_coverage=1 00:33:42.150 --rc genhtml_legend=1 00:33:42.150 --rc geninfo_all_blocks=1 00:33:42.150 --rc geninfo_unexecuted_blocks=1 00:33:42.150 00:33:42.150 ' 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:42.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.150 --rc genhtml_branch_coverage=1 00:33:42.150 --rc genhtml_function_coverage=1 00:33:42.150 --rc genhtml_legend=1 00:33:42.150 --rc geninfo_all_blocks=1 00:33:42.150 --rc geninfo_unexecuted_blocks=1 00:33:42.150 00:33:42.150 ' 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:42.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.150 --rc genhtml_branch_coverage=1 00:33:42.150 --rc genhtml_function_coverage=1 00:33:42.150 --rc genhtml_legend=1 00:33:42.150 --rc geninfo_all_blocks=1 00:33:42.150 --rc geninfo_unexecuted_blocks=1 00:33:42.150 
00:33:42.150 ' 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:42.150 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:42.151 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:50.305 00:41:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:50.305 00:41:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:50.305 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:50.305 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:50.305 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:50.305 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:50.305 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:50.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:50.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:33:50.306 00:33:50.306 --- 10.0.0.2 ping statistics --- 00:33:50.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.306 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:50.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:33:50.306 00:33:50.306 --- 10.0.0.1 ping statistics --- 00:33:50.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.306 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:50.306 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3519314 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3519314 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3519314 ']' 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
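For readers reconstructing the network bring-up traced above: the nvmf_tcp_init steps reduce to moving one port of the e810 pair into a private network namespace for the target and leaving the other port in the root namespace for the initiator. The sketch below is a manual equivalent assembled only from the commands visible in this trace; the interface names (cvl_0_0/cvl_0_1), the namespace name and the 10.0.0.0/24 addresses are simply what this run used, not fixed requirements.

  # Manual equivalent of the traced nvmf_tcp_init sequence (names from this run)
  TARGET_IF=cvl_0_0          # port handed to the SPDK target
  INITIATOR_IF=cvl_0_1       # port left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the default NVMe/TCP port (the run also tags the rule with a comment)
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  # Connectivity check in both directions, as the log does
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1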
00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.306 [2024-10-09 00:41:20.087005] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:50.306 [2024-10-09 00:41:20.088182] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:33:50.306 [2024-10-09 00:41:20.088233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.306 [2024-10-09 00:41:20.180687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:50.306 [2024-10-09 00:41:20.275712] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.306 [2024-10-09 00:41:20.275792] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.306 [2024-10-09 00:41:20.275800] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.306 [2024-10-09 00:41:20.275808] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.306 [2024-10-09 00:41:20.275814] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.306 [2024-10-09 00:41:20.277817] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.306 [2024-10-09 00:41:20.277996] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:50.306 [2024-10-09 00:41:20.278159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:50.306 [2024-10-09 00:41:20.278160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.306 [2024-10-09 00:41:20.374215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:50.306 [2024-10-09 00:41:20.375725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:50.306 [2024-10-09 00:41:20.376125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:50.306 [2024-10-09 00:41:20.376592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:50.306 [2024-10-09 00:41:20.376600] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
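The target itself is launched inside that namespace with interrupt mode on, which is what produces the reactor and spdk_thread notices just above. A minimal stand-alone version of that step might look like the following; the polling loop is only an illustrative stand-in for the framework's waitforlisten helper and assumes the default RPC socket path.

  NS=cvl_0_0_ns_spdk
  NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

  # Same flags as the traced run: instance 0, tracepoint mask 0xFFFF,
  # interrupt mode, reactors pinned to cores 0-3 (-m 0xF)
  ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!

  # Illustrative stand-in for waitforlisten: wait for the RPC socket to appear
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  echo "nvmf_tgt is up (pid $nvmfpid)"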
00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.306 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:50.567 [2024-10-09 00:41:21.079179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.567 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:50.828 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:50.828 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:51.088 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:51.088 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:51.088 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:51.088 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:51.349 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:51.349 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:51.609 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:51.870 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:51.870 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:51.870 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:51.870 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:52.132 00:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:52.132 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:52.393 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:52.654 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:52.654 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:52.654 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:52.654 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:52.914 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.914 [2024-10-09 00:41:23.531008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.174 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:53.174 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:53.434 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:53.695 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:53.696 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:33:53.696 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:53.696 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:33:53.696 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:33:53.696 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:33:56.248 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:56.248 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:33:56.248 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:56.248 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:33:56.248 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:56.248 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:33:56.248 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:56.248 [global] 00:33:56.248 thread=1 00:33:56.248 invalidate=1 00:33:56.248 rw=write 00:33:56.248 time_based=1 00:33:56.248 runtime=1 00:33:56.248 ioengine=libaio 00:33:56.248 direct=1 00:33:56.248 bs=4096 00:33:56.248 iodepth=1 00:33:56.248 norandommap=0 00:33:56.248 numjobs=1 00:33:56.248 00:33:56.248 verify_dump=1 00:33:56.248 verify_backlog=512 00:33:56.248 verify_state_save=0 00:33:56.248 do_verify=1 00:33:56.248 verify=crc32c-intel 00:33:56.248 [job0] 00:33:56.248 filename=/dev/nvme0n1 00:33:56.248 [job1] 00:33:56.248 filename=/dev/nvme0n2 00:33:56.248 [job2] 00:33:56.248 filename=/dev/nvme0n3 00:33:56.248 [job3] 00:33:56.248 filename=/dev/nvme0n4 00:33:56.248 Could not set queue depth (nvme0n1) 00:33:56.248 Could not set queue depth (nvme0n2) 00:33:56.248 Could not set queue depth (nvme0n3) 00:33:56.248 Could not set queue depth (nvme0n4) 00:33:56.248 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:56.248 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:56.248 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:56.248 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:56.248 fio-3.35 00:33:56.248 Starting 4 threads 00:33:57.658 00:33:57.658 job0: (groupid=0, jobs=1): err= 0: pid=3520849: Wed Oct 9 00:41:27 2024 00:33:57.658 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:33:57.658 slat (nsec): min=27530, max=28172, avg=27776.94, stdev=207.09 00:33:57.658 clat (usec): min=41004, max=42076, avg=41848.34, stdev=300.61 00:33:57.658 lat (usec): min=41032, max=42104, avg=41876.12, stdev=300.59 00:33:57.658 clat percentiles (usec): 00:33:57.658 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:33:57.658 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:57.658 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:57.658 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:57.658 | 99.99th=[42206] 00:33:57.658 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:33:57.658 slat (nsec): min=9424, max=56377, avg=32282.65, stdev=9863.89 00:33:57.658 clat (usec): min=239, max=842, avg=607.79, stdev=117.61 00:33:57.658 lat (usec): min=251, max=889, avg=640.07, stdev=122.42 00:33:57.658 clat percentiles (usec): 00:33:57.658 | 1.00th=[ 293], 5.00th=[ 388], 10.00th=[ 449], 20.00th=[ 502], 00:33:57.658 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:33:57.659 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 775], 00:33:57.659 
| 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 840], 99.95th=[ 840], 00:33:57.659 | 99.99th=[ 840] 00:33:57.659 bw ( KiB/s): min= 4096, max= 4096, per=37.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.659 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.659 lat (usec) : 250=0.19%, 500=19.13%, 750=67.61%, 1000=10.04% 00:33:57.659 lat (msec) : 50=3.03% 00:33:57.659 cpu : usr=0.80%, sys=2.40%, ctx=532, majf=0, minf=1 00:33:57.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.659 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.659 job1: (groupid=0, jobs=1): err= 0: pid=3520865: Wed Oct 9 00:41:27 2024 00:33:57.659 read: IOPS=680, BW=2721KiB/s (2787kB/s)(2724KiB/1001msec) 00:33:57.659 slat (nsec): min=6219, max=55866, avg=24124.11, stdev=7409.05 00:33:57.659 clat (usec): min=349, max=1089, avg=749.61, stdev=113.20 00:33:57.659 lat (usec): min=357, max=1116, avg=773.73, stdev=115.75 00:33:57.659 clat percentiles (usec): 00:33:57.659 | 1.00th=[ 453], 5.00th=[ 562], 10.00th=[ 594], 20.00th=[ 652], 00:33:57.659 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 791], 00:33:57.659 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 922], 00:33:57.659 | 99.00th=[ 1004], 99.50th=[ 1045], 99.90th=[ 1090], 99.95th=[ 1090], 00:33:57.659 | 99.99th=[ 1090] 00:33:57.659 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:57.659 slat (nsec): min=8941, max=67854, avg=30828.40, stdev=9265.01 00:33:57.659 clat (usec): min=183, max=867, avg=419.19, stdev=115.06 00:33:57.659 lat (usec): min=206, max=901, avg=450.02, stdev=117.47 00:33:57.659 clat percentiles (usec): 00:33:57.659 | 1.00th=[ 208], 5.00th=[ 251], 10.00th=[ 289], 20.00th=[ 314], 00:33:57.659 | 30.00th=[ 338], 40.00th=[ 379], 50.00th=[ 416], 60.00th=[ 441], 00:33:57.659 | 70.00th=[ 478], 80.00th=[ 515], 90.00th=[ 570], 95.00th=[ 611], 00:33:57.659 | 99.00th=[ 742], 99.50th=[ 783], 99.90th=[ 865], 99.95th=[ 865], 00:33:57.659 | 99.99th=[ 865] 00:33:57.659 bw ( KiB/s): min= 4096, max= 4096, per=37.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.659 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.659 lat (usec) : 250=2.87%, 500=43.93%, 750=31.73%, 1000=21.06% 00:33:57.659 lat (msec) : 2=0.41% 00:33:57.659 cpu : usr=3.80%, sys=6.00%, ctx=1705, majf=0, minf=1 00:33:57.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.659 issued rwts: total=681,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.659 job2: (groupid=0, jobs=1): err= 0: pid=3520881: Wed Oct 9 00:41:27 2024 00:33:57.659 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:57.659 slat (nsec): min=7899, max=60082, avg=26375.14, stdev=2717.87 00:33:57.659 clat (usec): min=680, max=1193, avg=1004.29, stdev=79.51 00:33:57.659 lat (usec): min=706, max=1218, avg=1030.66, stdev=79.44 00:33:57.659 clat percentiles (usec): 00:33:57.659 | 1.00th=[ 775], 5.00th=[ 832], 10.00th=[ 906], 20.00th=[ 955], 00:33:57.659 | 30.00th=[ 979], 
40.00th=[ 1004], 50.00th=[ 1012], 60.00th=[ 1029], 00:33:57.659 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:33:57.659 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1188], 99.95th=[ 1188], 00:33:57.659 | 99.99th=[ 1188] 00:33:57.659 write: IOPS=713, BW=2853KiB/s (2922kB/s)(2856KiB/1001msec); 0 zone resets 00:33:57.659 slat (nsec): min=10038, max=56080, avg=30382.21, stdev=10046.74 00:33:57.659 clat (usec): min=192, max=1095, avg=617.32, stdev=119.90 00:33:57.659 lat (usec): min=202, max=1130, avg=647.70, stdev=123.61 00:33:57.659 clat percentiles (usec): 00:33:57.659 | 1.00th=[ 310], 5.00th=[ 396], 10.00th=[ 465], 20.00th=[ 523], 00:33:57.659 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 660], 00:33:57.659 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 750], 95.00th=[ 783], 00:33:57.659 | 99.00th=[ 922], 99.50th=[ 963], 99.90th=[ 1090], 99.95th=[ 1090], 00:33:57.659 | 99.99th=[ 1090] 00:33:57.659 bw ( KiB/s): min= 4096, max= 4096, per=37.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.659 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.659 lat (usec) : 250=0.24%, 500=9.87%, 750=42.41%, 1000=21.86% 00:33:57.659 lat (msec) : 2=25.61% 00:33:57.659 cpu : usr=1.90%, sys=3.50%, ctx=1227, majf=0, minf=1 00:33:57.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.659 issued rwts: total=512,714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.659 job3: (groupid=0, jobs=1): err= 0: pid=3520888: Wed Oct 9 00:41:27 2024 00:33:57.659 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1013msec) 00:33:57.659 slat (nsec): min=26138, max=27089, avg=26464.76, stdev=238.61 00:33:57.659 clat (usec): min=1167, max=42068, avg=39454.82, stdev=9870.96 00:33:57.659 lat (usec): min=1194, max=42094, avg=39481.28, stdev=9870.93 00:33:57.659 clat percentiles (usec): 00:33:57.659 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41681], 00:33:57.659 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:57.659 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:57.659 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:57.659 | 99.99th=[42206] 00:33:57.659 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:33:57.659 slat (nsec): min=9186, max=53464, avg=30577.04, stdev=8969.64 00:33:57.659 clat (usec): min=237, max=1051, avg=629.76, stdev=129.05 00:33:57.659 lat (usec): min=250, max=1069, avg=660.34, stdev=133.03 00:33:57.659 clat percentiles (usec): 00:33:57.659 | 1.00th=[ 277], 5.00th=[ 396], 10.00th=[ 461], 20.00th=[ 529], 00:33:57.659 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:33:57.659 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 00:33:57.659 | 99.00th=[ 979], 99.50th=[ 1012], 99.90th=[ 1057], 99.95th=[ 1057], 00:33:57.659 | 99.99th=[ 1057] 00:33:57.659 bw ( KiB/s): min= 4096, max= 4096, per=37.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.659 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.659 lat (usec) : 250=0.19%, 500=15.12%, 750=67.67%, 1000=13.04% 00:33:57.659 lat (msec) : 2=0.95%, 50=3.02% 00:33:57.659 cpu : usr=0.89%, sys=2.08%, ctx=529, majf=0, minf=2 00:33:57.659 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.659 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.659 00:33:57.659 Run status group 0 (all jobs): 00:33:57.659 READ: bw=4841KiB/s (4957kB/s), 63.9KiB/s-2721KiB/s (65.4kB/s-2787kB/s), io=4904KiB (5022kB), run=1001-1013msec 00:33:57.659 WRITE: bw=10.7MiB/s (11.2MB/s), 2022KiB/s-4092KiB/s (2070kB/s-4190kB/s), io=10.8MiB (11.3MB), run=1001-1013msec 00:33:57.659 00:33:57.659 Disk stats (read/write): 00:33:57.659 nvme0n1: ios=68/512, merge=0/0, ticks=1383/247, in_queue=1630, util=98.90% 00:33:57.659 nvme0n2: ios=546/951, merge=0/0, ticks=719/299, in_queue=1018, util=91.42% 00:33:57.659 nvme0n3: ios=497/512, merge=0/0, ticks=1399/307, in_queue=1706, util=96.72% 00:33:57.659 nvme0n4: ios=12/512, merge=0/0, ticks=462/245, in_queue=707, util=89.40% 00:33:57.659 00:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:57.659 [global] 00:33:57.659 thread=1 00:33:57.659 invalidate=1 00:33:57.659 rw=randwrite 00:33:57.659 time_based=1 00:33:57.659 runtime=1 00:33:57.659 ioengine=libaio 00:33:57.659 direct=1 00:33:57.659 bs=4096 00:33:57.659 iodepth=1 00:33:57.659 norandommap=0 00:33:57.659 numjobs=1 00:33:57.659 00:33:57.659 verify_dump=1 00:33:57.659 verify_backlog=512 00:33:57.659 verify_state_save=0 00:33:57.659 do_verify=1 00:33:57.659 verify=crc32c-intel 00:33:57.659 [job0] 00:33:57.659 filename=/dev/nvme0n1 00:33:57.659 [job1] 00:33:57.659 filename=/dev/nvme0n2 00:33:57.659 [job2] 00:33:57.659 filename=/dev/nvme0n3 00:33:57.659 [job3] 00:33:57.659 filename=/dev/nvme0n4 00:33:57.659 Could not set queue depth (nvme0n1) 00:33:57.659 Could not set queue depth (nvme0n2) 00:33:57.659 Could not set queue depth (nvme0n3) 00:33:57.659 Could not set queue depth (nvme0n4) 00:33:57.920 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:57.920 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:57.920 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:57.920 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:57.920 fio-3.35 00:33:57.920 Starting 4 threads 00:33:59.334 00:33:59.334 job0: (groupid=0, jobs=1): err= 0: pid=3521303: Wed Oct 9 00:41:29 2024 00:33:59.334 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:59.334 slat (nsec): min=7217, max=57891, avg=25861.98, stdev=4238.53 00:33:59.334 clat (usec): min=350, max=1216, avg=828.44, stdev=146.50 00:33:59.334 lat (usec): min=376, max=1242, avg=854.31, stdev=146.75 00:33:59.334 clat percentiles (usec): 00:33:59.334 | 1.00th=[ 469], 5.00th=[ 553], 10.00th=[ 619], 20.00th=[ 717], 00:33:59.334 | 30.00th=[ 775], 40.00th=[ 807], 50.00th=[ 848], 60.00th=[ 873], 00:33:59.334 | 70.00th=[ 914], 80.00th=[ 947], 90.00th=[ 1004], 95.00th=[ 1045], 00:33:59.334 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1221], 99.95th=[ 1221], 00:33:59.334 | 99.99th=[ 1221] 00:33:59.334 write: IOPS=1016, BW=4068KiB/s (4166kB/s)(4072KiB/1001msec); 0 zone resets 
00:33:59.334 slat (nsec): min=9863, max=52608, avg=32242.47, stdev=6951.87 00:33:59.334 clat (usec): min=142, max=865, avg=504.90, stdev=137.13 00:33:59.334 lat (usec): min=154, max=898, avg=537.14, stdev=138.21 00:33:59.334 clat percentiles (usec): 00:33:59.334 | 1.00th=[ 215], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 383], 00:33:59.334 | 30.00th=[ 429], 40.00th=[ 469], 50.00th=[ 506], 60.00th=[ 553], 00:33:59.334 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 685], 95.00th=[ 717], 00:33:59.334 | 99.00th=[ 791], 99.50th=[ 824], 99.90th=[ 865], 99.95th=[ 865], 00:33:59.334 | 99.99th=[ 865] 00:33:59.334 bw ( KiB/s): min= 4096, max= 4096, per=36.57%, avg=4096.00, stdev= 0.00, samples=1 00:33:59.334 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:59.334 lat (usec) : 250=1.50%, 500=30.85%, 750=41.96%, 1000=22.29% 00:33:59.334 lat (msec) : 2=3.40% 00:33:59.335 cpu : usr=2.30%, sys=4.80%, ctx=1533, majf=0, minf=1 00:33:59.335 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.335 issued rwts: total=512,1018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.335 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:59.335 job1: (groupid=0, jobs=1): err= 0: pid=3521317: Wed Oct 9 00:41:29 2024 00:33:59.335 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:59.335 slat (nsec): min=7198, max=60253, avg=26182.94, stdev=3023.15 00:33:59.335 clat (usec): min=602, max=1334, avg=1017.76, stdev=109.96 00:33:59.335 lat (usec): min=629, max=1360, avg=1043.94, stdev=109.90 00:33:59.335 clat percentiles (usec): 00:33:59.335 | 1.00th=[ 717], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 930], 00:33:59.335 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1057], 00:33:59.335 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:33:59.335 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1336], 99.95th=[ 1336], 00:33:59.335 | 99.99th=[ 1336] 00:33:59.335 write: IOPS=640, BW=2561KiB/s (2623kB/s)(2564KiB/1001msec); 0 zone resets 00:33:59.335 slat (nsec): min=9097, max=52429, avg=31063.74, stdev=7118.69 00:33:59.335 clat (usec): min=148, max=1034, avg=680.37, stdev=140.04 00:33:59.335 lat (usec): min=160, max=1066, avg=711.43, stdev=142.00 00:33:59.335 clat percentiles (usec): 00:33:59.335 | 1.00th=[ 306], 5.00th=[ 445], 10.00th=[ 502], 20.00th=[ 570], 00:33:59.335 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 717], 00:33:59.335 | 70.00th=[ 758], 80.00th=[ 807], 90.00th=[ 857], 95.00th=[ 898], 00:33:59.335 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1037], 99.95th=[ 1037], 00:33:59.335 | 99.99th=[ 1037] 00:33:59.335 bw ( KiB/s): min= 4096, max= 4096, per=36.57%, avg=4096.00, stdev= 0.00, samples=1 00:33:59.335 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:59.335 lat (usec) : 250=0.43%, 500=5.20%, 750=32.35%, 1000=35.91% 00:33:59.335 lat (msec) : 2=26.11% 00:33:59.335 cpu : usr=2.60%, sys=4.40%, ctx=1153, majf=0, minf=2 00:33:59.335 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.335 issued rwts: total=512,641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.335 latency : target=0, window=0, percentile=100.00%, depth=1 
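For orientation while reading these per-job dumps: the four devices under test (nvme0n1..nvme0n4) were provisioned earlier in the trace with a short rpc.py sequence, namely a TCP transport, seven 64 MB malloc bdevs with a 512-byte block size, a raid0 over Malloc2/Malloc3, a concat volume over Malloc4-Malloc6, and a single subsystem listening on 10.0.0.2:4420. Condensed into a stand-alone sketch (the --hostnqn/--hostid options used by the run are omitted for brevity) it is roughly:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_transport -t tcp -o -u 8192

  # Seven 64 MB malloc bdevs, 512-byte block size (Malloc0..Malloc6)
  for i in 0 1 2 3 4 5 6; do $RPC bdev_malloc_create 64 512; done

  # Malloc2+Malloc3 -> raid0, Malloc4..Malloc6 -> concat0
  $RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # One subsystem exposing Malloc0, Malloc1, raid0 and concat0 over TCP
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do $RPC nvmf_subsystem_add_ns "$NQN" "$b"; done
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: connect, then wait until all four namespaces are visible
  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 4 ]; do sleep 1; done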
00:33:59.335 job2: (groupid=0, jobs=1): err= 0: pid=3521334: Wed Oct 9 00:41:29 2024 00:33:59.335 read: IOPS=17, BW=71.2KiB/s (72.9kB/s)(72.0KiB/1011msec) 00:33:59.335 slat (nsec): min=26602, max=27094, avg=26789.78, stdev=126.82 00:33:59.335 clat (usec): min=40832, max=42012, avg=41126.16, stdev=373.62 00:33:59.335 lat (usec): min=40859, max=42038, avg=41152.95, stdev=373.60 00:33:59.335 clat percentiles (usec): 00:33:59.335 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:59.335 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:59.335 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:33:59.335 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:59.335 | 99.99th=[42206] 00:33:59.335 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:33:59.335 slat (nsec): min=9909, max=51564, avg=29416.81, stdev=9969.63 00:33:59.335 clat (usec): min=120, max=695, avg=479.51, stdev=114.26 00:33:59.335 lat (usec): min=130, max=719, avg=508.93, stdev=119.76 00:33:59.335 clat percentiles (usec): 00:33:59.335 | 1.00th=[ 249], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 363], 00:33:59.335 | 30.00th=[ 408], 40.00th=[ 465], 50.00th=[ 515], 60.00th=[ 537], 00:33:59.335 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 635], 00:33:59.335 | 99.00th=[ 676], 99.50th=[ 685], 99.90th=[ 693], 99.95th=[ 693], 00:33:59.335 | 99.99th=[ 693] 00:33:59.335 bw ( KiB/s): min= 4096, max= 4096, per=36.57%, avg=4096.00, stdev= 0.00, samples=1 00:33:59.335 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:59.335 lat (usec) : 250=1.13%, 500=42.64%, 750=52.83% 00:33:59.335 lat (msec) : 50=3.40% 00:33:59.335 cpu : usr=0.89%, sys=1.29%, ctx=532, majf=0, minf=1 00:33:59.335 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.335 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.335 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:59.335 job3: (groupid=0, jobs=1): err= 0: pid=3521340: Wed Oct 9 00:41:29 2024 00:33:59.335 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:59.335 slat (nsec): min=24175, max=58016, avg=25489.93, stdev=3028.49 00:33:59.335 clat (usec): min=635, max=1340, avg=1039.38, stdev=103.23 00:33:59.335 lat (usec): min=660, max=1365, avg=1064.87, stdev=102.93 00:33:59.335 clat percentiles (usec): 00:33:59.335 | 1.00th=[ 750], 5.00th=[ 840], 10.00th=[ 906], 20.00th=[ 971], 00:33:59.335 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1074], 00:33:59.335 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:33:59.335 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1336], 00:33:59.335 | 99.99th=[ 1336] 00:33:59.335 write: IOPS=659, BW=2637KiB/s (2701kB/s)(2640KiB/1001msec); 0 zone resets 00:33:59.335 slat (nsec): min=9507, max=64486, avg=29059.10, stdev=8161.89 00:33:59.335 clat (usec): min=274, max=960, avg=645.55, stdev=130.90 00:33:59.335 lat (usec): min=300, max=973, avg=674.61, stdev=133.27 00:33:59.335 clat percentiles (usec): 00:33:59.335 | 1.00th=[ 338], 5.00th=[ 404], 10.00th=[ 474], 20.00th=[ 529], 00:33:59.335 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 693], 00:33:59.335 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 840], 00:33:59.335 | 
99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 963], 00:33:59.335 | 99.99th=[ 963] 00:33:59.335 bw ( KiB/s): min= 4096, max= 4096, per=36.57%, avg=4096.00, stdev= 0.00, samples=1 00:33:59.335 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:59.335 lat (usec) : 500=8.70%, 750=35.75%, 1000=24.32% 00:33:59.335 lat (msec) : 2=31.23% 00:33:59.335 cpu : usr=1.40%, sys=3.70%, ctx=1172, majf=0, minf=1 00:33:59.335 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:59.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.335 issued rwts: total=512,660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.335 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:59.335 00:33:59.335 Run status group 0 (all jobs): 00:33:59.335 READ: bw=6148KiB/s (6296kB/s), 71.2KiB/s-2046KiB/s (72.9kB/s-2095kB/s), io=6216KiB (6365kB), run=1001-1011msec 00:33:59.335 WRITE: bw=10.9MiB/s (11.5MB/s), 2026KiB/s-4068KiB/s (2074kB/s-4166kB/s), io=11.1MiB (11.6MB), run=1001-1011msec 00:33:59.335 00:33:59.335 Disk stats (read/write): 00:33:59.335 nvme0n1: ios=538/711, merge=0/0, ticks=1404/331, in_queue=1735, util=96.19% 00:33:59.335 nvme0n2: ios=479/512, merge=0/0, ticks=474/265, in_queue=739, util=87.87% 00:33:59.335 nvme0n3: ios=36/512, merge=0/0, ticks=1500/237, in_queue=1737, util=96.84% 00:33:59.335 nvme0n4: ios=451/512, merge=0/0, ticks=450/310, in_queue=760, util=89.43% 00:33:59.335 00:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:59.335 [global] 00:33:59.335 thread=1 00:33:59.335 invalidate=1 00:33:59.335 rw=write 00:33:59.335 time_based=1 00:33:59.335 runtime=1 00:33:59.335 ioengine=libaio 00:33:59.335 direct=1 00:33:59.335 bs=4096 00:33:59.335 iodepth=128 00:33:59.335 norandommap=0 00:33:59.335 numjobs=1 00:33:59.335 00:33:59.335 verify_dump=1 00:33:59.335 verify_backlog=512 00:33:59.335 verify_state_save=0 00:33:59.335 do_verify=1 00:33:59.335 verify=crc32c-intel 00:33:59.335 [job0] 00:33:59.335 filename=/dev/nvme0n1 00:33:59.335 [job1] 00:33:59.335 filename=/dev/nvme0n2 00:33:59.335 [job2] 00:33:59.335 filename=/dev/nvme0n3 00:33:59.335 [job3] 00:33:59.335 filename=/dev/nvme0n4 00:33:59.335 Could not set queue depth (nvme0n1) 00:33:59.335 Could not set queue depth (nvme0n2) 00:33:59.335 Could not set queue depth (nvme0n3) 00:33:59.335 Could not set queue depth (nvme0n4) 00:33:59.595 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.595 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.595 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.595 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.595 fio-3.35 00:33:59.595 Starting 4 threads 00:34:00.997 00:34:00.997 job0: (groupid=0, jobs=1): err= 0: pid=3521757: Wed Oct 9 00:41:31 2024 00:34:00.997 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:34:00.997 slat (nsec): min=940, max=10986k, avg=79401.85, stdev=579838.60 00:34:00.997 clat (usec): min=2664, max=72858, avg=9876.61, stdev=6635.51 00:34:00.997 lat (usec): min=2668, max=73146, avg=9956.02, stdev=6703.65 
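A quick sanity check when scanning these dumps: with the fixed 4 KiB block size used here, bandwidth is just IOPS times block size, so job0's 5570 read IOPS above corresponds to about 21.8 MiB/s (22.8 MB/s), which is exactly what fio prints. A throwaway check with the numbers from this run:

  # BW = IOPS * bs; figures taken from job0 of the run above
  iops=5570
  bs=4096
  awk -v iops="$iops" -v bs="$bs" 'BEGIN {
      bps = iops * bs                                # bytes per second
      printf "%.1f MiB/s (%.1f MB/s)\n", bps/1048576, bps/1000000
  }'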
00:34:00.997 clat percentiles (usec): 00:34:00.997 | 1.00th=[ 3851], 5.00th=[ 4555], 10.00th=[ 5276], 20.00th=[ 5800], 00:34:00.997 | 30.00th=[ 6915], 40.00th=[ 7898], 50.00th=[ 8586], 60.00th=[ 9503], 00:34:00.997 | 70.00th=[10683], 80.00th=[12387], 90.00th=[14353], 95.00th=[17957], 00:34:00.997 | 99.00th=[44303], 99.50th=[58983], 99.90th=[66323], 99.95th=[72877], 00:34:00.997 | 99.99th=[72877] 00:34:00.997 write: IOPS=5861, BW=22.9MiB/s (24.0MB/s)(23.1MiB/1011msec); 0 zone resets 00:34:00.997 slat (nsec): min=1626, max=8566.2k, avg=88502.96, stdev=531654.49 00:34:00.997 clat (usec): min=1172, max=72861, avg=12225.43, stdev=12722.64 00:34:00.997 lat (usec): min=1182, max=72876, avg=12313.93, stdev=12797.10 00:34:00.997 clat percentiles (usec): 00:34:00.997 | 1.00th=[ 3851], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5669], 00:34:00.997 | 30.00th=[ 6325], 40.00th=[ 7373], 50.00th=[ 8291], 60.00th=[ 9372], 00:34:00.997 | 70.00th=[10421], 80.00th=[13304], 90.00th=[22152], 95.00th=[44303], 00:34:00.997 | 99.00th=[69731], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:34:00.997 | 99.99th=[72877] 00:34:00.997 bw ( KiB/s): min=15368, max=31024, per=26.62%, avg=23196.00, stdev=11070.46, samples=2 00:34:00.997 iops : min= 3842, max= 7756, avg=5799.00, stdev=2767.62, samples=2 00:34:00.997 lat (msec) : 2=0.10%, 4=2.19%, 10=62.83%, 20=27.54%, 50=4.79% 00:34:00.997 lat (msec) : 100=2.54% 00:34:00.997 cpu : usr=3.76%, sys=5.35%, ctx=415, majf=0, minf=1 00:34:00.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:00.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.997 issued rwts: total=5632,5926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.997 job1: (groupid=0, jobs=1): err= 0: pid=3521768: Wed Oct 9 00:41:31 2024 00:34:00.997 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:34:00.997 slat (nsec): min=915, max=8597.5k, avg=68921.23, stdev=478348.57 00:34:00.997 clat (usec): min=2248, max=35151, avg=7925.88, stdev=4160.52 00:34:00.997 lat (usec): min=2253, max=35161, avg=7994.81, stdev=4216.83 00:34:00.997 clat percentiles (usec): 00:34:00.997 | 1.00th=[ 3523], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5145], 00:34:00.997 | 30.00th=[ 5473], 40.00th=[ 5800], 50.00th=[ 6390], 60.00th=[ 7177], 00:34:00.997 | 70.00th=[ 8291], 80.00th=[10421], 90.00th=[12911], 95.00th=[16319], 00:34:00.997 | 99.00th=[23725], 99.50th=[30016], 99.90th=[34341], 99.95th=[35390], 00:34:00.997 | 99.99th=[35390] 00:34:00.997 write: IOPS=5835, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1004msec); 0 zone resets 00:34:00.997 slat (nsec): min=1580, max=12725k, avg=100085.45, stdev=591424.97 00:34:00.997 clat (usec): min=1117, max=123036, avg=14114.49, stdev=17865.43 00:34:00.997 lat (usec): min=1128, max=123052, avg=14214.58, stdev=17978.39 00:34:00.997 clat percentiles (msec): 00:34:00.997 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 5], 00:34:00.997 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 8], 60.00th=[ 11], 00:34:00.997 | 70.00th=[ 13], 80.00th=[ 18], 90.00th=[ 34], 95.00th=[ 50], 00:34:00.997 | 99.00th=[ 103], 99.50th=[ 116], 99.90th=[ 124], 99.95th=[ 124], 00:34:00.997 | 99.99th=[ 124] 00:34:00.997 bw ( KiB/s): min=12288, max=33568, per=26.31%, avg=22928.00, stdev=15047.23, samples=2 00:34:00.997 iops : min= 3072, max= 8392, avg=5732.00, stdev=3761.81, samples=2 00:34:00.997 lat (msec) : 
2=0.03%, 4=8.06%, 10=60.71%, 20=21.14%, 50=7.57% 00:34:00.997 lat (msec) : 100=1.95%, 250=0.55% 00:34:00.997 cpu : usr=2.29%, sys=6.08%, ctx=539, majf=0, minf=2 00:34:00.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:00.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.997 issued rwts: total=5632,5859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.997 job2: (groupid=0, jobs=1): err= 0: pid=3521785: Wed Oct 9 00:41:31 2024 00:34:00.997 read: IOPS=4108, BW=16.0MiB/s (16.8MB/s)(16.2MiB/1010msec) 00:34:00.997 slat (nsec): min=941, max=11807k, avg=86097.85, stdev=585906.03 00:34:00.997 clat (usec): min=2788, max=53305, avg=9864.48, stdev=5357.14 00:34:00.997 lat (usec): min=2796, max=53313, avg=9950.58, stdev=5432.87 00:34:00.997 clat percentiles (usec): 00:34:00.997 | 1.00th=[ 3326], 5.00th=[ 5473], 10.00th=[ 6456], 20.00th=[ 7046], 00:34:00.998 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8160], 00:34:00.998 | 70.00th=[ 9634], 80.00th=[12649], 90.00th=[15270], 95.00th=[19006], 00:34:00.998 | 99.00th=[31851], 99.50th=[34866], 99.90th=[47449], 99.95th=[47449], 00:34:00.998 | 99.99th=[53216] 00:34:00.998 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:34:00.998 slat (nsec): min=1610, max=11173k, avg=124226.72, stdev=772551.24 00:34:00.998 clat (usec): min=358, max=131830, avg=18927.26, stdev=24489.59 00:34:00.998 lat (usec): min=367, max=131838, avg=19051.49, stdev=24654.30 00:34:00.998 clat percentiles (usec): 00:34:00.998 | 1.00th=[ 1188], 5.00th=[ 2180], 10.00th=[ 3621], 20.00th=[ 6128], 00:34:00.998 | 30.00th=[ 7111], 40.00th=[ 8455], 50.00th=[ 10552], 60.00th=[ 11994], 00:34:00.998 | 70.00th=[ 12649], 80.00th=[ 22676], 90.00th=[ 51643], 95.00th=[ 74974], 00:34:00.998 | 99.00th=[121111], 99.50th=[126354], 99.90th=[131597], 99.95th=[131597], 00:34:00.998 | 99.99th=[131597] 00:34:00.998 bw ( KiB/s): min=11144, max=25136, per=20.82%, avg=18140.00, stdev=9893.84, samples=2 00:34:00.998 iops : min= 2786, max= 6284, avg=4535.00, stdev=2473.46, samples=2 00:34:00.998 lat (usec) : 500=0.02%, 750=0.08%, 1000=0.19% 00:34:00.998 lat (msec) : 2=2.00%, 4=4.34%, 10=51.53%, 20=28.82%, 50=7.63% 00:34:00.998 lat (msec) : 100=3.95%, 250=1.44% 00:34:00.998 cpu : usr=3.17%, sys=4.66%, ctx=432, majf=0, minf=2 00:34:00.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:00.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.998 issued rwts: total=4150,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.998 job3: (groupid=0, jobs=1): err= 0: pid=3521791: Wed Oct 9 00:41:31 2024 00:34:00.998 read: IOPS=5356, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1005msec) 00:34:00.998 slat (nsec): min=939, max=11124k, avg=89387.00, stdev=653240.25 00:34:00.998 clat (usec): min=844, max=34331, avg=11781.20, stdev=4327.96 00:34:00.998 lat (usec): min=3544, max=34338, avg=11870.58, stdev=4373.09 00:34:00.998 clat percentiles (usec): 00:34:00.998 | 1.00th=[ 4621], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7963], 00:34:00.998 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10945], 60.00th=[12649], 00:34:00.998 | 70.00th=[13435], 80.00th=[15270], 
90.00th=[18482], 95.00th=[20055], 00:34:00.998 | 99.00th=[23725], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:34:00.998 | 99.99th=[34341] 00:34:00.998 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:34:00.998 slat (nsec): min=1563, max=10673k, avg=79170.14, stdev=654664.12 00:34:00.998 clat (usec): min=3318, max=51113, avg=11378.18, stdev=5375.87 00:34:00.998 lat (usec): min=3327, max=51122, avg=11457.35, stdev=5418.53 00:34:00.998 clat percentiles (usec): 00:34:00.998 | 1.00th=[ 4178], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6521], 00:34:00.998 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[10290], 60.00th=[11600], 00:34:00.998 | 70.00th=[13173], 80.00th=[15664], 90.00th=[18744], 95.00th=[20579], 00:34:00.998 | 99.00th=[26870], 99.50th=[31327], 99.90th=[42206], 99.95th=[42206], 00:34:00.998 | 99.99th=[51119] 00:34:00.998 bw ( KiB/s): min=20480, max=24576, per=25.85%, avg=22528.00, stdev=2896.31, samples=2 00:34:00.998 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:34:00.998 lat (usec) : 1000=0.01% 00:34:00.998 lat (msec) : 4=0.45%, 10=44.58%, 20=48.14%, 50=6.80%, 100=0.01% 00:34:00.998 cpu : usr=4.88%, sys=5.58%, ctx=285, majf=0, minf=2 00:34:00.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:00.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.998 issued rwts: total=5383,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.998 00:34:00.998 Run status group 0 (all jobs): 00:34:00.998 READ: bw=80.4MiB/s (84.3MB/s), 16.0MiB/s-21.9MiB/s (16.8MB/s-23.0MB/s), io=81.2MiB (85.2MB), run=1004-1011msec 00:34:00.998 WRITE: bw=85.1MiB/s (89.2MB/s), 17.8MiB/s-22.9MiB/s (18.7MB/s-24.0MB/s), io=86.0MiB (90.2MB), run=1004-1011msec 00:34:00.998 00:34:00.998 Disk stats (read/write): 00:34:00.998 nvme0n1: ios=5144/5191, merge=0/0, ticks=46718/56077, in_queue=102795, util=96.49% 00:34:00.998 nvme0n2: ios=3663/4096, merge=0/0, ticks=31748/71934, in_queue=103682, util=88.06% 00:34:00.998 nvme0n3: ios=4096/4271, merge=0/0, ticks=38357/57473, in_queue=95830, util=88.38% 00:34:00.998 nvme0n4: ios=4395/4608, merge=0/0, ticks=38787/36553, in_queue=75340, util=90.38% 00:34:00.998 00:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:00.998 [global] 00:34:00.998 thread=1 00:34:00.998 invalidate=1 00:34:00.998 rw=randwrite 00:34:00.998 time_based=1 00:34:00.998 runtime=1 00:34:00.998 ioengine=libaio 00:34:00.998 direct=1 00:34:00.998 bs=4096 00:34:00.998 iodepth=128 00:34:00.998 norandommap=0 00:34:00.998 numjobs=1 00:34:00.998 00:34:00.998 verify_dump=1 00:34:00.998 verify_backlog=512 00:34:00.998 verify_state_save=0 00:34:00.998 do_verify=1 00:34:00.998 verify=crc32c-intel 00:34:00.998 [job0] 00:34:00.998 filename=/dev/nvme0n1 00:34:00.998 [job1] 00:34:00.998 filename=/dev/nvme0n2 00:34:00.998 [job2] 00:34:00.998 filename=/dev/nvme0n3 00:34:00.998 [job3] 00:34:00.998 filename=/dev/nvme0n4 00:34:00.998 Could not set queue depth (nvme0n1) 00:34:00.998 Could not set queue depth (nvme0n2) 00:34:00.998 Could not set queue depth (nvme0n3) 00:34:00.998 Could not set queue depth (nvme0n4) 00:34:01.260 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:34:01.260 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:01.260 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:01.260 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:01.260 fio-3.35 00:34:01.260 Starting 4 threads 00:34:02.667 00:34:02.667 job0: (groupid=0, jobs=1): err= 0: pid=3522194: Wed Oct 9 00:41:32 2024 00:34:02.667 read: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec) 00:34:02.667 slat (nsec): min=920, max=12181k, avg=60729.59, stdev=442153.87 00:34:02.667 clat (usec): min=3462, max=21312, avg=8007.16, stdev=2314.75 00:34:02.667 lat (usec): min=3465, max=27533, avg=8067.89, stdev=2341.12 00:34:02.667 clat percentiles (usec): 00:34:02.667 | 1.00th=[ 4490], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 6456], 00:34:02.667 | 30.00th=[ 6783], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 8160], 00:34:02.667 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10421], 95.00th=[11863], 00:34:02.667 | 99.00th=[18482], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:34:02.667 | 99.99th=[21365] 00:34:02.667 write: IOPS=8405, BW=32.8MiB/s (34.4MB/s)(32.9MiB/1002msec); 0 zone resets 00:34:02.667 slat (nsec): min=1570, max=8600.3k, avg=55175.61, stdev=355513.41 00:34:02.667 clat (usec): min=632, max=19294, avg=7290.78, stdev=2131.74 00:34:02.667 lat (usec): min=1164, max=19299, avg=7345.96, stdev=2143.21 00:34:02.667 clat percentiles (usec): 00:34:02.667 | 1.00th=[ 3032], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 5538], 00:34:02.667 | 30.00th=[ 6325], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7504], 00:34:02.667 | 70.00th=[ 7832], 80.00th=[ 8586], 90.00th=[ 9503], 95.00th=[10945], 00:34:02.667 | 99.00th=[15008], 99.50th=[16712], 99.90th=[16712], 99.95th=[16712], 00:34:02.667 | 99.99th=[19268] 00:34:02.667 bw ( KiB/s): min=32702, max=33592, per=32.60%, avg=33147.00, stdev=629.33, samples=2 00:34:02.667 iops : min= 8175, max= 8398, avg=8286.50, stdev=157.68, samples=2 00:34:02.667 lat (usec) : 750=0.01% 00:34:02.667 lat (msec) : 2=0.17%, 4=1.30%, 10=87.72%, 20=10.80%, 50=0.01% 00:34:02.667 cpu : usr=5.00%, sys=7.39%, ctx=673, majf=0, minf=1 00:34:02.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:02.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:02.667 issued rwts: total=8192,8422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.667 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:02.667 job1: (groupid=0, jobs=1): err= 0: pid=3522212: Wed Oct 9 00:41:32 2024 00:34:02.667 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:34:02.667 slat (nsec): min=928, max=16387k, avg=139029.49, stdev=955824.62 00:34:02.667 clat (usec): min=4639, max=54345, avg=18101.27, stdev=12359.10 00:34:02.667 lat (usec): min=4644, max=54370, avg=18240.30, stdev=12463.46 00:34:02.667 clat percentiles (usec): 00:34:02.667 | 1.00th=[ 5211], 5.00th=[ 6456], 10.00th=[ 7111], 20.00th=[ 8094], 00:34:02.667 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[13304], 60.00th=[13960], 00:34:02.667 | 70.00th=[21627], 80.00th=[32113], 90.00th=[39060], 95.00th=[43254], 00:34:02.667 | 99.00th=[49021], 99.50th=[50594], 99.90th=[52167], 99.95th=[53216], 00:34:02.667 | 99.99th=[54264] 00:34:02.667 write: IOPS=3423, BW=13.4MiB/s 
(14.0MB/s)(13.4MiB/1005msec); 0 zone resets 00:34:02.667 slat (nsec): min=1549, max=14014k, avg=152217.34, stdev=993748.42 00:34:02.667 clat (usec): min=2748, max=73590, avg=20567.64, stdev=15961.07 00:34:02.667 lat (usec): min=2772, max=73601, avg=20719.86, stdev=16071.85 00:34:02.667 clat percentiles (usec): 00:34:02.667 | 1.00th=[ 4228], 5.00th=[ 4948], 10.00th=[ 5669], 20.00th=[ 6849], 00:34:02.667 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[12387], 60.00th=[17695], 00:34:02.667 | 70.00th=[28705], 80.00th=[34866], 90.00th=[39584], 95.00th=[54789], 00:34:02.667 | 99.00th=[65799], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:34:02.667 | 99.99th=[73925] 00:34:02.667 bw ( KiB/s): min=12255, max=14232, per=13.03%, avg=13243.50, stdev=1397.95, samples=2 00:34:02.667 iops : min= 3063, max= 3558, avg=3310.50, stdev=350.02, samples=2 00:34:02.667 lat (msec) : 4=0.05%, 10=31.66%, 20=32.40%, 50=32.37%, 100=3.53% 00:34:02.667 cpu : usr=2.29%, sys=3.78%, ctx=227, majf=0, minf=2 00:34:02.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:02.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:02.667 issued rwts: total=3072,3441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.667 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:02.667 job2: (groupid=0, jobs=1): err= 0: pid=3522233: Wed Oct 9 00:41:32 2024 00:34:02.667 read: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec) 00:34:02.667 slat (nsec): min=931, max=12201k, avg=69699.32, stdev=522761.01 00:34:02.667 clat (usec): min=3241, max=42172, avg=9462.06, stdev=4380.42 00:34:02.667 lat (usec): min=3247, max=42181, avg=9531.76, stdev=4420.04 00:34:02.667 clat percentiles (usec): 00:34:02.667 | 1.00th=[ 3851], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 7046], 00:34:02.667 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8586], 00:34:02.667 | 70.00th=[ 9634], 80.00th=[10945], 90.00th=[12387], 95.00th=[17957], 00:34:02.667 | 99.00th=[30016], 99.50th=[32637], 99.90th=[38536], 99.95th=[38536], 00:34:02.667 | 99.99th=[42206] 00:34:02.667 write: IOPS=6872, BW=26.8MiB/s (28.1MB/s)(27.1MiB/1008msec); 0 zone resets 00:34:02.667 slat (nsec): min=1608, max=7000.2k, avg=72472.61, stdev=462043.42 00:34:02.667 clat (usec): min=2030, max=77392, avg=9371.65, stdev=9221.95 00:34:02.667 lat (usec): min=2043, max=77401, avg=9444.12, stdev=9284.94 00:34:02.667 clat percentiles (usec): 00:34:02.667 | 1.00th=[ 3621], 5.00th=[ 4948], 10.00th=[ 5211], 20.00th=[ 5997], 00:34:02.667 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8225], 00:34:02.667 | 70.00th=[ 8356], 80.00th=[ 9241], 90.00th=[10683], 95.00th=[11338], 00:34:02.667 | 99.00th=[65799], 99.50th=[66323], 99.90th=[70779], 99.95th=[77071], 00:34:02.667 | 99.99th=[77071] 00:34:02.667 bw ( KiB/s): min=21632, max=32702, per=26.72%, avg=27167.00, stdev=7827.67, samples=2 00:34:02.667 iops : min= 5408, max= 8175, avg=6791.50, stdev=1956.56, samples=2 00:34:02.667 lat (msec) : 4=1.27%, 10=76.23%, 20=18.56%, 50=2.71%, 100=1.23% 00:34:02.667 cpu : usr=3.97%, sys=6.95%, ctx=555, majf=0, minf=3 00:34:02.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:02.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:02.667 issued rwts: total=6656,6927,0,0 short=0,0,0,0 dropped=0,0,0,0 
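The closing phase of this test, traced below after the sync, starts a 10-second read job across all four namespaces and then deletes the backing bdevs while it is still running, so the io_u "Operation not supported" and "Input/output error" messages that follow are the expected outcome of the hot-remove rather than test failures. Condensed, and assuming the same bdev names as above, the removal sequence is roughly:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Composite bdevs go first, then every malloc member; the raid/concat and
  # Malloc0/Malloc1 removals hot-unplug namespaces under the running fio job
  $RPC bdev_raid_delete concat0
  $RPC bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $RPC bdev_malloc_delete "$m"
  done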
00:34:02.667 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:02.667 job3: (groupid=0, jobs=1): err= 0: pid=3522239: Wed Oct 9 00:41:32 2024 00:34:02.667 read: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec) 00:34:02.667 slat (nsec): min=969, max=8761.9k, avg=73306.91, stdev=536299.40 00:34:02.667 clat (usec): min=2982, max=27893, avg=9771.32, stdev=2800.70 00:34:02.667 lat (usec): min=2991, max=27896, avg=9844.62, stdev=2827.78 00:34:02.667 clat percentiles (usec): 00:34:02.667 | 1.00th=[ 4490], 5.00th=[ 5932], 10.00th=[ 6718], 20.00th=[ 7701], 00:34:02.667 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10159], 00:34:02.667 | 70.00th=[10683], 80.00th=[11863], 90.00th=[13304], 95.00th=[14484], 00:34:02.667 | 99.00th=[17171], 99.50th=[23725], 99.90th=[27132], 99.95th=[27919], 00:34:02.667 | 99.99th=[27919] 00:34:02.667 write: IOPS=6775, BW=26.5MiB/s (27.8MB/s)(26.7MiB/1008msec); 0 zone resets 00:34:02.667 slat (nsec): min=1647, max=7185.8k, avg=69691.44, stdev=450446.74 00:34:02.667 clat (usec): min=2725, max=27888, avg=9165.75, stdev=3257.08 00:34:02.667 lat (usec): min=2733, max=27896, avg=9235.44, stdev=3273.49 00:34:02.667 clat percentiles (usec): 00:34:02.667 | 1.00th=[ 3720], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 7177], 00:34:02.667 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8717], 00:34:02.667 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[12780], 95.00th=[17695], 00:34:02.667 | 99.00th=[20841], 99.50th=[22152], 99.90th=[23200], 99.95th=[23200], 00:34:02.667 | 99.99th=[27919] 00:34:02.668 bw ( KiB/s): min=26496, max=27128, per=26.37%, avg=26812.00, stdev=446.89, samples=2 00:34:02.668 iops : min= 6624, max= 6782, avg=6703.00, stdev=111.72, samples=2 00:34:02.668 lat (msec) : 4=0.89%, 10=65.97%, 20=31.83%, 50=1.31% 00:34:02.668 cpu : usr=5.06%, sys=6.36%, ctx=532, majf=0, minf=1 00:34:02.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:02.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:02.668 issued rwts: total=6656,6830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:02.668 00:34:02.668 Run status group 0 (all jobs): 00:34:02.668 READ: bw=95.2MiB/s (99.9MB/s), 11.9MiB/s-31.9MiB/s (12.5MB/s-33.5MB/s), io=96.0MiB (101MB), run=1002-1008msec 00:34:02.668 WRITE: bw=99.3MiB/s (104MB/s), 13.4MiB/s-32.8MiB/s (14.0MB/s-34.4MB/s), io=100MiB (105MB), run=1002-1008msec 00:34:02.668 00:34:02.668 Disk stats (read/write): 00:34:02.668 nvme0n1: ios=6821/7168, merge=0/0, ticks=51848/49315, in_queue=101163, util=96.49% 00:34:02.668 nvme0n2: ios=1886/2048, merge=0/0, ticks=21506/30321, in_queue=51827, util=87.46% 00:34:02.668 nvme0n3: ios=6370/6656, merge=0/0, ticks=51626/48940, in_queue=100566, util=88.49% 00:34:02.668 nvme0n4: ios=5138/5632, merge=0/0, ticks=50403/51735, in_queue=102138, util=97.01% 00:34:02.668 00:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:02.668 00:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3522485 00:34:02.668 00:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:02.668 00:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 
-- # sleep 3 00:34:02.668 [global] 00:34:02.668 thread=1 00:34:02.668 invalidate=1 00:34:02.668 rw=read 00:34:02.668 time_based=1 00:34:02.668 runtime=10 00:34:02.668 ioengine=libaio 00:34:02.668 direct=1 00:34:02.668 bs=4096 00:34:02.668 iodepth=1 00:34:02.668 norandommap=1 00:34:02.668 numjobs=1 00:34:02.668 00:34:02.668 [job0] 00:34:02.668 filename=/dev/nvme0n1 00:34:02.668 [job1] 00:34:02.668 filename=/dev/nvme0n2 00:34:02.668 [job2] 00:34:02.668 filename=/dev/nvme0n3 00:34:02.668 [job3] 00:34:02.668 filename=/dev/nvme0n4 00:34:02.668 Could not set queue depth (nvme0n1) 00:34:02.668 Could not set queue depth (nvme0n2) 00:34:02.668 Could not set queue depth (nvme0n3) 00:34:02.668 Could not set queue depth (nvme0n4) 00:34:02.931 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:02.931 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:02.931 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:02.931 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:02.931 fio-3.35 00:34:02.931 Starting 4 threads 00:34:05.560 00:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:05.560 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7426048, buflen=4096 00:34:05.560 fio: pid=3522705, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:05.560 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:05.844 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11374592, buflen=4096 00:34:05.844 fio: pid=3522704, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:05.844 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:05.844 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:05.844 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=782336, buflen=4096 00:34:05.844 fio: pid=3522696, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:34:06.104 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:06.104 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:06.104 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16076800, buflen=4096 00:34:06.104 fio: pid=3522697, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:06.104 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:06.104 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:06.104 00:34:06.104 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3522696: Wed Oct 9 00:41:36 2024 00:34:06.104 read: IOPS=64, BW=258KiB/s (264kB/s)(764KiB/2958msec) 00:34:06.104 slat (usec): min=6, max=10667, avg=119.67, stdev=941.64 00:34:06.104 clat (usec): min=767, max=45257, avg=15355.41, stdev=19536.50 00:34:06.104 lat (usec): min=792, max=52885, avg=15475.54, stdev=19696.09 00:34:06.104 clat percentiles (usec): 00:34:06.104 | 1.00th=[ 807], 5.00th=[ 889], 10.00th=[ 947], 20.00th=[ 996], 00:34:06.104 | 30.00th=[ 1029], 40.00th=[ 1074], 50.00th=[ 1123], 60.00th=[ 1156], 00:34:06.104 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:06.104 | 99.00th=[43779], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:34:06.104 | 99.99th=[45351] 00:34:06.104 bw ( KiB/s): min= 95, max= 744, per=2.57%, avg=287.80, stdev=288.67, samples=5 00:34:06.104 iops : min= 23, max= 186, avg=71.80, stdev=72.29, samples=5 00:34:06.104 lat (usec) : 1000=22.92% 00:34:06.104 lat (msec) : 2=41.67%, 50=34.90% 00:34:06.104 cpu : usr=0.10%, sys=0.20%, ctx=193, majf=0, minf=1 00:34:06.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:06.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.104 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.104 issued rwts: total=192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:06.104 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3522697: Wed Oct 9 00:41:36 2024 00:34:06.104 read: IOPS=1259, BW=5039KiB/s (5159kB/s)(15.3MiB/3116msec) 00:34:06.104 slat (usec): min=6, max=19807, avg=42.38, stdev=471.38 00:34:06.104 clat (usec): min=253, max=2528, avg=745.15, stdev=149.35 00:34:06.104 lat (usec): min=280, max=20655, avg=787.53, stdev=496.16 00:34:06.104 clat percentiles (usec): 00:34:06.104 | 1.00th=[ 396], 5.00th=[ 498], 10.00th=[ 545], 20.00th=[ 619], 00:34:06.104 | 30.00th=[ 668], 40.00th=[ 717], 50.00th=[ 758], 60.00th=[ 799], 00:34:06.104 | 70.00th=[ 832], 80.00th=[ 865], 90.00th=[ 914], 95.00th=[ 963], 00:34:06.104 | 99.00th=[ 1037], 99.50th=[ 1057], 99.90th=[ 1434], 99.95th=[ 2409], 00:34:06.104 | 99.99th=[ 2540] 00:34:06.104 bw ( KiB/s): min= 4569, max= 5320, per=45.37%, avg=5070.83, stdev=268.33, samples=6 00:34:06.104 iops : min= 1142, max= 1330, avg=1267.67, stdev=67.18, samples=6 00:34:06.104 lat (usec) : 500=5.15%, 750=42.79%, 1000=49.59% 00:34:06.104 lat (msec) : 2=2.39%, 4=0.05% 00:34:06.104 cpu : usr=2.09%, sys=4.78%, ctx=3933, majf=0, minf=2 00:34:06.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:06.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.104 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.104 issued rwts: total=3926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:06.104 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3522704: Wed Oct 9 00:41:36 2024 00:34:06.104 read: IOPS=1011, BW=4044KiB/s (4141kB/s)(10.8MiB/2747msec) 00:34:06.104 slat (usec): min=6, max=13926, avg=35.97, stdev=349.92 00:34:06.104 clat (usec): min=322, max=2142, 
avg=946.54, stdev=113.53 00:34:06.104 lat (usec): min=329, max=14839, avg=982.51, stdev=366.74 00:34:06.104 clat percentiles (usec): 00:34:06.104 | 1.00th=[ 611], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 865], 00:34:06.104 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 947], 60.00th=[ 979], 00:34:06.105 | 70.00th=[ 1004], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1123], 00:34:06.105 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1303], 99.95th=[ 1319], 00:34:06.105 | 99.99th=[ 2147] 00:34:06.105 bw ( KiB/s): min= 4048, max= 4144, per=36.61%, avg=4092.80, stdev=36.92, samples=5 00:34:06.105 iops : min= 1012, max= 1036, avg=1023.20, stdev= 9.23, samples=5 00:34:06.105 lat (usec) : 500=0.25%, 750=4.10%, 1000=64.25% 00:34:06.105 lat (msec) : 2=31.32%, 4=0.04% 00:34:06.105 cpu : usr=1.68%, sys=4.15%, ctx=2781, majf=0, minf=2 00:34:06.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:06.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.105 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.105 issued rwts: total=2778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:06.105 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3522705: Wed Oct 9 00:41:36 2024 00:34:06.105 read: IOPS=702, BW=2809KiB/s (2876kB/s)(7252KiB/2582msec) 00:34:06.105 slat (nsec): min=24521, max=58314, avg=25738.49, stdev=2791.34 00:34:06.105 clat (usec): min=773, max=42100, avg=1391.69, stdev=3704.12 00:34:06.105 lat (usec): min=799, max=42126, avg=1417.43, stdev=3704.08 00:34:06.105 clat percentiles (usec): 00:34:06.105 | 1.00th=[ 816], 5.00th=[ 889], 10.00th=[ 947], 20.00th=[ 1004], 00:34:06.105 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:34:06.105 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1188], 00:34:06.105 | 99.00th=[ 1287], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:06.105 | 99.99th=[42206] 00:34:06.105 bw ( KiB/s): min= 328, max= 3696, per=25.91%, avg=2896.00, stdev=1458.09, samples=5 00:34:06.105 iops : min= 82, max= 924, avg=724.00, stdev=364.52, samples=5 00:34:06.105 lat (usec) : 1000=19.68% 00:34:06.105 lat (msec) : 2=79.44%, 50=0.83% 00:34:06.105 cpu : usr=0.50%, sys=2.36%, ctx=1815, majf=0, minf=2 00:34:06.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:06.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.105 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.105 issued rwts: total=1814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:06.105 00:34:06.105 Run status group 0 (all jobs): 00:34:06.105 READ: bw=10.9MiB/s (11.4MB/s), 258KiB/s-5039KiB/s (264kB/s-5159kB/s), io=34.0MiB (35.7MB), run=2582-3116msec 00:34:06.105 00:34:06.105 Disk stats (read/write): 00:34:06.105 nvme0n1: ios=188/0, merge=0/0, ticks=2799/0, in_queue=2799, util=94.19% 00:34:06.105 nvme0n2: ios=3898/0, merge=0/0, ticks=2451/0, in_queue=2451, util=93.68% 00:34:06.105 nvme0n3: ios=2644/0, merge=0/0, ticks=2341/0, in_queue=2341, util=95.99% 00:34:06.105 nvme0n4: ios=1807/0, merge=0/0, ticks=2231/0, in_queue=2231, util=96.06% 00:34:06.364 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:34:06.364 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:06.623 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:06.623 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:06.623 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:06.623 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:06.883 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:06.883 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3522485 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:07.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:07.143 nvmf hotplug test: fio failed as expected 00:34:07.143 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:07.403 rmmod nvme_tcp 00:34:07.403 rmmod nvme_fabrics 00:34:07.403 rmmod nvme_keyring 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3519314 ']' 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3519314 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3519314 ']' 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3519314 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:07.403 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3519314 00:34:07.403 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:07.403 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:07.403 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3519314' 00:34:07.403 killing process with pid 3519314 00:34:07.403 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3519314 00:34:07.403 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3519314 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:07.663 00:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.663 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.590 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:09.590 00:34:09.590 real 0m27.879s 00:34:09.590 user 2m21.063s 00:34:09.590 sys 0m12.708s 00:34:09.590 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:09.590 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.590 ************************************ 00:34:09.590 END TEST nvmf_fio_target 00:34:09.590 ************************************ 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:09.851 ************************************ 00:34:09.851 START TEST nvmf_bdevio 00:34:09.851 ************************************ 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:09.851 * Looking for test storage... 
00:34:09.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:09.851 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:09.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.851 --rc genhtml_branch_coverage=1 00:34:09.851 --rc genhtml_function_coverage=1 00:34:09.851 --rc genhtml_legend=1 00:34:09.851 --rc geninfo_all_blocks=1 00:34:09.851 --rc geninfo_unexecuted_blocks=1 00:34:09.852 00:34:09.852 ' 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:09.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.852 --rc genhtml_branch_coverage=1 00:34:09.852 --rc genhtml_function_coverage=1 00:34:09.852 --rc genhtml_legend=1 00:34:09.852 --rc geninfo_all_blocks=1 00:34:09.852 --rc geninfo_unexecuted_blocks=1 00:34:09.852 00:34:09.852 ' 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:09.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.852 --rc genhtml_branch_coverage=1 00:34:09.852 --rc genhtml_function_coverage=1 00:34:09.852 --rc genhtml_legend=1 00:34:09.852 --rc geninfo_all_blocks=1 00:34:09.852 --rc geninfo_unexecuted_blocks=1 00:34:09.852 00:34:09.852 ' 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:09.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.852 --rc genhtml_branch_coverage=1 00:34:09.852 --rc genhtml_function_coverage=1 00:34:09.852 --rc genhtml_legend=1 00:34:09.852 --rc geninfo_all_blocks=1 00:34:09.852 --rc geninfo_unexecuted_blocks=1 00:34:09.852 00:34:09.852 ' 00:34:09.852 00:41:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.852 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.113 00:41:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:10.113 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.297 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.297 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:18.297 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:18.297 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:18.297 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:18.297 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:18.297 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:18.297 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:18.298 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:18.298 00:41:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:18.298 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:18.298 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:18.298 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:18.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:34:18.298 00:34:18.298 --- 10.0.0.2 ping statistics --- 00:34:18.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.298 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:18.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:34:18.298 00:34:18.298 --- 10.0.0.1 ping statistics --- 00:34:18.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.298 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:18.298 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:18.299 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:18.299 00:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3527717 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3527717 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3527717 ']' 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:18.299 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.299 [2024-10-09 00:41:48.065068] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:18.299 [2024-10-09 00:41:48.066186] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:34:18.299 [2024-10-09 00:41:48.066238] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.299 [2024-10-09 00:41:48.154700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:18.299 [2024-10-09 00:41:48.252063] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.299 [2024-10-09 00:41:48.252121] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.299 [2024-10-09 00:41:48.252129] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.299 [2024-10-09 00:41:48.252136] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.299 [2024-10-09 00:41:48.252143] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.299 [2024-10-09 00:41:48.254211] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:34:18.299 [2024-10-09 00:41:48.254372] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:34:18.299 [2024-10-09 00:41:48.254530] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:34:18.299 [2024-10-09 00:41:48.254531] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:18.299 [2024-10-09 00:41:48.341016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:18.299 [2024-10-09 00:41:48.341954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:18.299 [2024-10-09 00:41:48.342244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:18.300 [2024-10-09 00:41:48.342595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:18.300 [2024-10-09 00:41:48.342640] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:18.300 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:18.300 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:34:18.300 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:18.300 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:18.300 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.300 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:18.300 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:18.300 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.300 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.569 [2024-10-09 00:41:48.927511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.569 Malloc0 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:18.569 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.569 00:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:18.569 [2024-10-09 00:41:49.011666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:18.569 { 00:34:18.569 "params": { 00:34:18.569 "name": "Nvme$subsystem", 00:34:18.569 "trtype": "$TEST_TRANSPORT", 00:34:18.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.569 "adrfam": "ipv4", 00:34:18.569 "trsvcid": "$NVMF_PORT", 00:34:18.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.569 "hdgst": ${hdgst:-false}, 00:34:18.569 "ddgst": ${ddgst:-false} 00:34:18.569 }, 00:34:18.569 "method": "bdev_nvme_attach_controller" 00:34:18.569 } 00:34:18.569 EOF 00:34:18.569 )") 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:34:18.569 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:18.569 "params": { 00:34:18.569 "name": "Nvme1", 00:34:18.569 "trtype": "tcp", 00:34:18.569 "traddr": "10.0.0.2", 00:34:18.569 "adrfam": "ipv4", 00:34:18.569 "trsvcid": "4420", 00:34:18.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:18.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:18.569 "hdgst": false, 00:34:18.569 "ddgst": false 00:34:18.569 }, 00:34:18.569 "method": "bdev_nvme_attach_controller" 00:34:18.569 }' 00:34:18.569 [2024-10-09 00:41:49.045679] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:34:18.569 [2024-10-09 00:41:49.045756] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528050 ] 00:34:18.569 [2024-10-09 00:41:49.120639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:18.829 [2024-10-09 00:41:49.218290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.829 [2024-10-09 00:41:49.218453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.829 [2024-10-09 00:41:49.218453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.090 I/O targets: 00:34:19.090 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:19.090 00:34:19.090 00:34:19.090 CUnit - A unit testing framework for C - Version 2.1-3 00:34:19.090 http://cunit.sourceforge.net/ 00:34:19.090 00:34:19.090 00:34:19.090 Suite: bdevio tests on: Nvme1n1 00:34:19.090 Test: blockdev write read block ...passed 00:34:19.090 Test: blockdev write zeroes read block ...passed 00:34:19.090 Test: blockdev write zeroes read no split ...passed 00:34:19.090 Test: blockdev write zeroes read split ...passed 00:34:19.090 Test: blockdev write zeroes read split partial ...passed 00:34:19.090 Test: blockdev reset ...[2024-10-09 00:41:49.716530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.090 [2024-10-09 00:41:49.716637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f20d0 (9): Bad file descriptor 00:34:19.090 [2024-10-09 00:41:49.723706] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
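Collected out of the xtrace above, the target-side provisioning for this test is five RPCs, after which bdevio attaches as an initiator through a JSON config fed over /dev/fd/62. The sketch below condenses that sequence; the rpc_cmd wrapper and the exact JSON wrapper shape are assumptions (gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed in the trace), but every RPC name, size, NQN and address is taken from the log.

rpc_cmd() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }     # assumed socket path

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                # TCP transport, 8192-byte in-capsule data
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB RAM bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator-side config handed to bdevio (wrapper shape approximate; params as printed above)
cat > /tmp/bdevio.json << 'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
              "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } } ] } ] }
EOF
./test/bdev/bdevio/bdevio --json /tmp/bdevio.json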
00:34:19.090 passed 00:34:19.351 Test: blockdev write read 8 blocks ...passed 00:34:19.351 Test: blockdev write read size > 128k ...passed 00:34:19.351 Test: blockdev write read invalid size ...passed 00:34:19.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:19.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:19.351 Test: blockdev write read max offset ...passed 00:34:19.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:19.351 Test: blockdev writev readv 8 blocks ...passed 00:34:19.351 Test: blockdev writev readv 30 x 1block ...passed 00:34:19.351 Test: blockdev writev readv block ...passed 00:34:19.612 Test: blockdev writev readv size > 128k ...passed 00:34:19.612 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:19.612 Test: blockdev comparev and writev ...[2024-10-09 00:41:49.991463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:19.612 [2024-10-09 00:41:49.991511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:49.991528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:19.612 [2024-10-09 00:41:49.991545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:49.992169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:19.612 [2024-10-09 00:41:49.992183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:49.992197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:19.612 [2024-10-09 00:41:49.992206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:49.992850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:19.612 [2024-10-09 00:41:49.992863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:49.992877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:19.612 [2024-10-09 00:41:49.992885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:49.993538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:19.612 [2024-10-09 00:41:49.993550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:49.993564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:19.612 [2024-10-09 00:41:49.993572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:19.612 passed 00:34:19.612 Test: blockdev nvme passthru rw ...passed 00:34:19.612 Test: blockdev nvme passthru vendor specific ...[2024-10-09 00:41:50.078483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:19.612 [2024-10-09 00:41:50.078512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:50.078678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:19.612 [2024-10-09 00:41:50.078689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:50.078905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:19.612 [2024-10-09 00:41:50.078916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:19.612 [2024-10-09 00:41:50.079120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:19.612 [2024-10-09 00:41:50.079130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:19.612 passed 00:34:19.612 Test: blockdev nvme admin passthru ...passed 00:34:19.612 Test: blockdev copy ...passed 00:34:19.612 00:34:19.612 Run Summary: Type Total Ran Passed Failed Inactive 00:34:19.612 suites 1 1 n/a 0 0 00:34:19.612 tests 23 23 23 0 0 00:34:19.612 asserts 152 152 152 0 n/a 00:34:19.612 00:34:19.612 Elapsed time = 1.197 seconds 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:19.873 rmmod nvme_tcp 00:34:19.873 rmmod nvme_fabrics 00:34:19.873 rmmod nvme_keyring 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
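The teardown driven by nvmftestfini is interleaved with the test summary above: unload the host NVMe fabrics modules, stop the target (killprocess first confirms the PID still belongs to an SPDK reactor), strip the SPDK_NVMF-tagged iptables rules and flush the test interfaces. A condensed sketch, assuming nvmfpid was captured when the target was launched; the ip netns delete line is an assumed equivalent of remove_spdk_ns, and the process-name check is a simplified stand-in for killprocess's checks.

# Unload the NVMe/TCP initiator modules (nvme_keyring is also removed in the trace above)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
modprobe -v -r nvme-keyring

# Make sure the PID is still the reactor we started, then stop it and wait for exit
[[ $(ps --no-headers -o comm= "$nvmfpid") == reactor_* ]] && kill "$nvmfpid"
wait "$nvmfpid" 2> /dev/null || true

# Drop the SPDK_NVMF-tagged firewall rules and flush the initiator-side interface
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk 2> /dev/null                  # assumed remove_spdk_ns equivalent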
00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3527717 ']' 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3527717 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3527717 ']' 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3527717 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3527717 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3527717' 00:34:19.873 killing process with pid 3527717 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3527717 00:34:19.873 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3527717 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.134 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.694 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.694 00:34:22.694 real 0m12.465s 00:34:22.694 user 
0m10.763s 00:34:22.694 sys 0m6.582s 00:34:22.695 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:22.695 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.695 ************************************ 00:34:22.695 END TEST nvmf_bdevio 00:34:22.695 ************************************ 00:34:22.695 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:22.695 00:34:22.695 real 5m1.972s 00:34:22.695 user 10m24.307s 00:34:22.695 sys 2m6.746s 00:34:22.695 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:22.695 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:22.695 ************************************ 00:34:22.695 END TEST nvmf_target_core_interrupt_mode 00:34:22.695 ************************************ 00:34:22.695 00:41:52 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:22.695 00:41:52 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:22.695 00:41:52 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:22.695 00:41:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.695 ************************************ 00:34:22.695 START TEST nvmf_interrupt 00:34:22.695 ************************************ 00:34:22.695 00:41:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:22.695 * Looking for test storage... 
00:34:22.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:22.695 00:41:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:22.695 00:41:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:34:22.695 00:41:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.695 --rc genhtml_branch_coverage=1 00:34:22.695 --rc genhtml_function_coverage=1 00:34:22.695 --rc genhtml_legend=1 00:34:22.695 --rc geninfo_all_blocks=1 00:34:22.695 --rc geninfo_unexecuted_blocks=1 00:34:22.695 00:34:22.695 ' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.695 --rc genhtml_branch_coverage=1 00:34:22.695 --rc genhtml_function_coverage=1 00:34:22.695 --rc genhtml_legend=1 00:34:22.695 --rc geninfo_all_blocks=1 00:34:22.695 --rc geninfo_unexecuted_blocks=1 00:34:22.695 00:34:22.695 ' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.695 --rc genhtml_branch_coverage=1 00:34:22.695 --rc genhtml_function_coverage=1 00:34:22.695 --rc genhtml_legend=1 00:34:22.695 --rc geninfo_all_blocks=1 00:34:22.695 --rc geninfo_unexecuted_blocks=1 00:34:22.695 00:34:22.695 ' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.695 --rc genhtml_branch_coverage=1 00:34:22.695 --rc genhtml_function_coverage=1 00:34:22.695 --rc genhtml_legend=1 00:34:22.695 --rc geninfo_all_blocks=1 00:34:22.695 --rc geninfo_unexecuted_blocks=1 00:34:22.695 00:34:22.695 ' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:22.695 00:41:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.696 00:41:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:30.859 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.859 00:42:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:30.859 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:30.859 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:30.859 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:30.859 00:42:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:30.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:34:30.859 00:34:30.859 --- 10.0.0.2 ping statistics --- 00:34:30.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.859 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:30.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:34:30.859 00:34:30.859 --- 10.0.0.1 ping statistics --- 00:34:30.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.859 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=3532437 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 3532437 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3532437 ']' 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:30.859 00:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:30.859 [2024-10-09 00:42:00.664946] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:30.859 [2024-10-09 00:42:00.666054] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:34:30.859 [2024-10-09 00:42:00.666102] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.859 [2024-10-09 00:42:00.754257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:30.859 [2024-10-09 00:42:00.851512] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
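Before the interrupt-mode target is configured, nvmf_tcp_init (traced through nvmf/common.sh above) splits the two e810 ports: cvl_0_0 is moved into a private namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1, with an SPDK_NVMF-tagged iptables rule opening port 4420 and a ping in each direction as a sanity check. The same plumbing reduced to its commands (interface names as discovered on this rig):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port now lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port; the comment tag lets nvmftestfini strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify both directions before the target is started
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1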
00:34:30.859 [2024-10-09 00:42:00.851576] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.859 [2024-10-09 00:42:00.851586] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.859 [2024-10-09 00:42:00.851593] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.859 [2024-10-09 00:42:00.851599] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:30.859 [2024-10-09 00:42:00.852766] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.859 [2024-10-09 00:42:00.852770] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.859 [2024-10-09 00:42:00.931874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:30.859 [2024-10-09 00:42:00.932671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:30.859 [2024-10-09 00:42:00.932855] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:31.119 5000+0 records in 00:34:31.119 5000+0 records out 00:34:31.119 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0190819 s, 537 MB/s 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:31.119 AIO0 00:34:31.119 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:31.120 [2024-10-09 00:42:01.637837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.120 00:42:01 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:31.120 [2024-10-09 00:42:01.694483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3532437 0 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3532437 0 idle 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3532437 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3532437 -w 256 00:34:31.120 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3532437 root 20 0 128.2g 43776 32256 R 0.0 0.0 0:00.35 reactor_0' 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3532437 root 20 0 128.2g 43776 32256 R 0.0 0.0 0:00.35 reactor_0 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3532437 1 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3532437 1 idle 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3532437 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3532437 -w 256 00:34:31.380 00:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3532445 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3532445 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3532878 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
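The assertion at the heart of this test is that each reactor thread shows roughly 0% CPU while the target is idle and climbs past the busy threshold once spdk_nvme_perf (started above with -q 256 -o 4096 -w randrw -M 30 -t 10 on cores 2-3) is pushing I/O. The check is a one-shot per-thread top sample, filtered to the reactor's thread name and compared against the thresholds. The helper below is a condensed sketch of interrupt/common.sh's logic (the real helper re-samples up to 10 times and sleeps between busy retries), with $pid assumed to hold the nvmf_tgt PID:

reactor_cpu() {                       # print %CPU of thread reactor_$2 inside process $1
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g' | awk '{print $9}'
}

busy_threshold=30                     # BUSY_THRESHOLD while perf is running (65 when checking idle)
idle_threshold=30

rate=$(reactor_cpu "$pid" 0)
rate=${rate%.*}                       # 99.9 -> 99, matching the cpu_rate handling in the trace
rate=${rate:-0}

if (( rate >= busy_threshold )); then
    echo "reactor_0 is busy (${rate}%)"
elif (( rate <= idle_threshold )); then
    echo "reactor_0 is idle (${rate}%)"
fi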
00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3532437 0 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3532437 0 busy 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3532437 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3532437 -w 256 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3532437 root 20 0 128.2g 44928 32256 R 20.0 0.0 0:00.38 reactor_0' 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3532437 root 20 0 128.2g 44928 32256 R 20.0 0.0 0:00.38 reactor_0 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:31.641 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:31.902 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=20.0 00:34:31.902 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=20 00:34:31.902 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:31.902 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:31.902 00:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:32.862 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:32.862 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3532437 -w 256 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3532437 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.74 reactor_0' 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3532437 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.74 reactor_0 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3532437 1 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3532437 1 busy 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3532437 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3532437 -w 256 00:34:32.863 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3532445 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.39 reactor_1' 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3532445 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.39 reactor_1 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:33.129 00:42:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3532878 00:34:43.145 Initializing NVMe Controllers 00:34:43.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:43.145 Controller IO queue size 256, less than required. 00:34:43.145 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:43.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:43.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:43.145 Initialization complete. Launching workers. 
00:34:43.145 ======================================================== 00:34:43.145 Latency(us) 00:34:43.145 Device Information : IOPS MiB/s Average min max 00:34:43.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18548.00 72.45 13806.33 4038.89 32376.26 00:34:43.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19071.10 74.50 13424.83 8255.96 29819.54 00:34:43.145 ======================================================== 00:34:43.145 Total : 37619.10 146.95 13612.93 4038.89 32376.26 00:34:43.145 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3532437 0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3532437 0 idle 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3532437 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3532437 -w 256 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3532437 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0' 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3532437 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3532437 1 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3532437 1 idle 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3532437 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3532437 -w 256 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3532445 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3532445 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:43.145 00:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:43.145 00:42:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:43.145 00:42:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:34:43.145 00:42:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:43.145 00:42:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:34:43.145 00:42:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3532437 0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3532437 0 idle 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3532437 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3532437 -w 256 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3532437 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.70 reactor_0' 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3532437 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.70 reactor_0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3532437 1 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3532437 1 idle 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3532437 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3532437 -w 256 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3532445 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3532445 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:45.074 00:42:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:45.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:45.334 00:42:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:45.334 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:34:45.334 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:45.334 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:45.335 rmmod nvme_tcp 00:34:45.335 rmmod nvme_fabrics 00:34:45.335 rmmod nvme_keyring 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
3532437 ']' 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 3532437 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3532437 ']' 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3532437 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3532437 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3532437' 00:34:45.335 killing process with pid 3532437 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3532437 00:34:45.335 00:42:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3532437 00:34:45.595 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:45.595 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:45.595 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:45.595 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:45.595 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:34:45.595 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:45.595 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:34:45.595 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:45.595 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:45.596 00:42:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.596 00:42:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:45.596 00:42:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.139 00:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:48.139 00:34:48.139 real 0m25.339s 00:34:48.139 user 0m40.199s 00:34:48.139 sys 0m9.922s 00:34:48.139 00:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:48.139 00:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:48.139 ************************************ 00:34:48.139 END TEST nvmf_interrupt 00:34:48.139 ************************************ 00:34:48.139 00:34:48.139 real 29m46.449s 00:34:48.139 user 61m12.536s 00:34:48.139 sys 10m7.489s 00:34:48.139 00:42:18 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:48.139 00:42:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.139 ************************************ 00:34:48.139 END TEST nvmf_tcp 00:34:48.139 ************************************ 00:34:48.139 00:42:18 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:34:48.139 00:42:18 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:48.139 00:42:18 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:48.139 00:42:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:48.139 00:42:18 -- common/autotest_common.sh@10 -- # set +x 00:34:48.139 ************************************ 00:34:48.139 START TEST spdkcli_nvmf_tcp 00:34:48.139 ************************************ 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:48.139 * Looking for test storage... 00:34:48.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:48.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.139 --rc genhtml_branch_coverage=1 00:34:48.139 --rc genhtml_function_coverage=1 00:34:48.139 --rc genhtml_legend=1 00:34:48.139 --rc geninfo_all_blocks=1 00:34:48.139 --rc geninfo_unexecuted_blocks=1 00:34:48.139 00:34:48.139 ' 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:48.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.139 --rc genhtml_branch_coverage=1 00:34:48.139 --rc genhtml_function_coverage=1 00:34:48.139 --rc genhtml_legend=1 00:34:48.139 --rc geninfo_all_blocks=1 00:34:48.139 --rc geninfo_unexecuted_blocks=1 00:34:48.139 00:34:48.139 ' 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:48.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.139 --rc genhtml_branch_coverage=1 00:34:48.139 --rc genhtml_function_coverage=1 00:34:48.139 --rc genhtml_legend=1 00:34:48.139 --rc geninfo_all_blocks=1 00:34:48.139 --rc geninfo_unexecuted_blocks=1 00:34:48.139 00:34:48.139 ' 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:48.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.139 --rc genhtml_branch_coverage=1 00:34:48.139 --rc genhtml_function_coverage=1 00:34:48.139 --rc genhtml_legend=1 00:34:48.139 --rc geninfo_all_blocks=1 00:34:48.139 --rc geninfo_unexecuted_blocks=1 00:34:48.139 00:34:48.139 ' 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:48.139 
00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:48.139 00:42:18 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.139 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:48.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3536544 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3536544 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3536544 ']' 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:48.140 00:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.140 [2024-10-09 00:42:18.627601] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:34:48.140 [2024-10-09 00:42:18.627672] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3536544 ] 00:34:48.140 [2024-10-09 00:42:18.707814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:48.401 [2024-10-09 00:42:18.803941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.401 [2024-10-09 00:42:18.804076] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.973 00:42:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:48.973 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:48.973 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:48.973 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:48.973 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:48.973 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:48.973 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:48.973 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:48.973 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:48.973 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:48.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:48.974 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:48.974 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:48.974 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:48.974 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:48.974 ' 00:34:51.531 [2024-10-09 00:42:22.164962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:52.912 [2024-10-09 00:42:23.525153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:55.459 [2024-10-09 00:42:26.056180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:58.006 [2024-10-09 00:42:28.274555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:59.397 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:59.397 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:59.397 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:59.397 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:59.397 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:59.397 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:59.397 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:59.397 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:59.397 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:59.397 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:59.397 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:59.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:59.398 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:59.398 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:59.398 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:59.398 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:59.398 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:59.659 00:42:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:59.659 00:42:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:59.659 00:42:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.659 00:42:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:59.659 00:42:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:59.659 00:42:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.659 00:42:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:59.659 00:42:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:59.919 00:42:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:59.919 00:42:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:59.919 00:42:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:59.919 00:42:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:59.919 00:42:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.180 
00:42:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:00.180 00:42:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:00.180 00:42:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.180 00:42:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:00.180 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:00.180 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:00.180 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:00.180 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:00.180 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:00.180 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:00.180 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:00.180 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:00.180 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:00.180 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:00.180 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:00.180 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:00.180 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:00.180 ' 00:35:06.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:06.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:06.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:06.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:06.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:06.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:06.778 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:06.778 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:06.778 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:06.778 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:06.778 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:06.778 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:06.778 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:06.778 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:06.778 
00:42:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3536544 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3536544 ']' 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3536544 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3536544 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3536544' 00:35:06.778 killing process with pid 3536544 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3536544 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3536544 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3536544 ']' 00:35:06.778 00:42:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3536544 00:35:06.779 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3536544 ']' 00:35:06.779 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3536544 00:35:06.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3536544) - No such process 00:35:06.779 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3536544 is not found' 00:35:06.779 Process with pid 3536544 is not found 00:35:06.779 00:42:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:06.779 00:42:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:06.779 00:42:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:06.779 00:35:06.779 real 0m18.137s 00:35:06.779 user 0m40.212s 00:35:06.779 sys 0m0.872s 00:35:06.779 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:06.779 00:42:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:06.779 ************************************ 00:35:06.779 END TEST spdkcli_nvmf_tcp 00:35:06.779 ************************************ 00:35:06.779 00:42:36 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:06.779 00:42:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:06.779 00:42:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:06.779 00:42:36 -- common/autotest_common.sh@10 -- # set +x 00:35:06.779 ************************************ 00:35:06.779 START TEST nvmf_identify_passthru 00:35:06.779 ************************************ 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:06.779 * Looking for test 
storage... 00:35:06.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:06.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.779 --rc genhtml_branch_coverage=1 00:35:06.779 --rc genhtml_function_coverage=1 00:35:06.779 --rc genhtml_legend=1 00:35:06.779 --rc geninfo_all_blocks=1 00:35:06.779 --rc geninfo_unexecuted_blocks=1 00:35:06.779 00:35:06.779 ' 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:06.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.779 --rc genhtml_branch_coverage=1 00:35:06.779 --rc genhtml_function_coverage=1 00:35:06.779 --rc genhtml_legend=1 00:35:06.779 --rc geninfo_all_blocks=1 00:35:06.779 --rc geninfo_unexecuted_blocks=1 00:35:06.779 00:35:06.779 ' 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:06.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.779 --rc genhtml_branch_coverage=1 00:35:06.779 --rc genhtml_function_coverage=1 00:35:06.779 --rc genhtml_legend=1 00:35:06.779 --rc geninfo_all_blocks=1 00:35:06.779 --rc geninfo_unexecuted_blocks=1 00:35:06.779 00:35:06.779 ' 00:35:06.779 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:06.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.779 --rc genhtml_branch_coverage=1 00:35:06.779 --rc genhtml_function_coverage=1 00:35:06.779 --rc genhtml_legend=1 00:35:06.779 --rc geninfo_all_blocks=1 00:35:06.779 --rc geninfo_unexecuted_blocks=1 00:35:06.779 00:35:06.779 ' 00:35:06.779 00:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.779 00:42:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.779 00:42:36 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.779 00:42:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.779 00:42:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.779 00:42:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:06.779 00:42:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.779 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:06.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.780 00:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.780 00:42:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.780 00:42:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.780 00:42:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.780 00:42:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.780 00:42:36 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.780 00:42:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.780 00:42:36 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.780 00:42:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:06.780 00:42:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.780 00:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.780 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:06.780 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:06.780 00:42:36 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:06.780 00:42:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:13.371 00:42:43 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:13.371 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:13.371 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:13.371 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:13.372 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:13.372 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:13.372 00:42:43 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:13.372 00:42:43 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:13.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:13.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:35:13.633 00:35:13.633 --- 10.0.0.2 ping statistics --- 00:35:13.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.633 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:13.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:13.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:35:13.633 00:35:13.633 --- 10.0.0.1 ping statistics --- 00:35:13.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.633 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:13.633 00:42:44 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:13.894 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.894 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:13.894 00:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:13.894 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:13.894 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:13.894 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:13.894 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:13.894 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:14.561 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:14.561 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:14.561 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:14.561 00:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:14.896 00:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:14.896 00:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.896 00:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.896 00:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3543956 00:35:14.896 00:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:14.896 00:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:14.896 00:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3543956 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3543956 ']' 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:14.896 00:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.896 [2024-10-09 00:42:45.487898] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:35:14.896 [2024-10-09 00:42:45.487964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.246 [2024-10-09 00:42:45.575394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:15.246 [2024-10-09 00:42:45.672291] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.246 [2024-10-09 00:42:45.672351] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:15.246 [2024-10-09 00:42:45.672360] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.246 [2024-10-09 00:42:45.672367] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.246 [2024-10-09 00:42:45.672374] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:15.246 [2024-10-09 00:42:45.674857] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.246 [2024-10-09 00:42:45.675010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:15.246 [2024-10-09 00:42:45.675178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.246 [2024-10-09 00:42:45.675178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:35:15.820 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.820 INFO: Log level set to 20 00:35:15.820 INFO: Requests: 00:35:15.820 { 00:35:15.820 "jsonrpc": "2.0", 00:35:15.820 "method": "nvmf_set_config", 00:35:15.820 "id": 1, 00:35:15.820 "params": { 00:35:15.820 "admin_cmd_passthru": { 00:35:15.820 "identify_ctrlr": true 00:35:15.820 } 00:35:15.820 } 00:35:15.820 } 00:35:15.820 00:35:15.820 INFO: response: 00:35:15.820 { 00:35:15.820 "jsonrpc": "2.0", 00:35:15.820 "id": 1, 00:35:15.820 "result": true 00:35:15.820 } 00:35:15.820 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.820 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.820 INFO: Setting log level to 20 00:35:15.820 INFO: Setting log level to 20 00:35:15.820 INFO: Log level set to 20 00:35:15.820 INFO: Log level set to 20 00:35:15.820 INFO: Requests: 00:35:15.820 { 00:35:15.820 "jsonrpc": "2.0", 00:35:15.820 "method": "framework_start_init", 00:35:15.820 "id": 1 00:35:15.820 } 00:35:15.820 00:35:15.820 INFO: Requests: 00:35:15.820 { 00:35:15.820 "jsonrpc": "2.0", 00:35:15.820 "method": "framework_start_init", 00:35:15.820 "id": 1 00:35:15.820 } 00:35:15.820 00:35:15.820 [2024-10-09 00:42:46.423915] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:15.820 INFO: response: 00:35:15.820 { 00:35:15.820 "jsonrpc": "2.0", 00:35:15.820 "id": 1, 00:35:15.820 "result": true 00:35:15.820 } 00:35:15.820 00:35:15.820 INFO: response: 00:35:15.820 { 00:35:15.820 "jsonrpc": "2.0", 00:35:15.820 "id": 1, 00:35:15.820 "result": true 00:35:15.820 } 00:35:15.820 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.820 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.820 00:42:46 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:15.820 INFO: Setting log level to 40 00:35:15.820 INFO: Setting log level to 40 00:35:15.820 INFO: Setting log level to 40 00:35:15.820 [2024-10-09 00:42:46.437563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.820 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:15.820 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.081 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:16.081 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.081 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.342 Nvme0n1 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.342 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.342 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.342 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.342 [2024-10-09 00:42:46.831620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.342 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.343 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:16.343 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.343 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.343 [ 00:35:16.343 { 00:35:16.343 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:16.343 "subtype": "Discovery", 00:35:16.343 "listen_addresses": [], 00:35:16.343 "allow_any_host": true, 00:35:16.343 "hosts": [] 00:35:16.343 }, 00:35:16.343 { 00:35:16.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:16.343 "subtype": "NVMe", 00:35:16.343 "listen_addresses": [ 00:35:16.343 { 00:35:16.343 "trtype": "TCP", 00:35:16.343 "adrfam": "IPv4", 00:35:16.343 "traddr": "10.0.0.2", 00:35:16.343 "trsvcid": "4420" 00:35:16.343 } 00:35:16.343 ], 00:35:16.343 "allow_any_host": true, 00:35:16.343 "hosts": [], 00:35:16.343 "serial_number": 
"SPDK00000000000001", 00:35:16.343 "model_number": "SPDK bdev Controller", 00:35:16.343 "max_namespaces": 1, 00:35:16.343 "min_cntlid": 1, 00:35:16.343 "max_cntlid": 65519, 00:35:16.343 "namespaces": [ 00:35:16.343 { 00:35:16.343 "nsid": 1, 00:35:16.343 "bdev_name": "Nvme0n1", 00:35:16.343 "name": "Nvme0n1", 00:35:16.343 "nguid": "36344730526054870025384500000044", 00:35:16.343 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:16.343 } 00:35:16.343 ] 00:35:16.343 } 00:35:16.343 ] 00:35:16.343 00:42:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.343 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:16.343 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:16.343 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:16.604 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:16.604 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:16.604 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:16.604 00:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:16.604 00:42:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:16.604 00:42:47 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:16.604 00:42:47 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:16.604 00:42:47 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:16.604 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.604 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.604 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.604 00:42:47 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:16.604 00:42:47 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:16.604 rmmod nvme_tcp 00:35:16.604 rmmod nvme_fabrics 00:35:16.604 rmmod nvme_keyring 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
3543956 ']' 00:35:16.604 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 3543956 00:35:16.604 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3543956 ']' 00:35:16.604 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3543956 00:35:16.604 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:35:16.604 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:16.865 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3543956 00:35:16.865 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:16.865 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:16.865 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3543956' 00:35:16.865 killing process with pid 3543956 00:35:16.865 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3543956 00:35:16.865 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3543956 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:17.131 00:42:47 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.131 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.131 00:42:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.684 00:42:49 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:19.684 00:35:19.684 real 0m13.169s 00:35:19.684 user 0m9.904s 00:35:19.684 sys 0m6.824s 00:35:19.684 00:42:49 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:19.684 00:42:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.684 ************************************ 00:35:19.684 END TEST nvmf_identify_passthru 00:35:19.684 ************************************ 00:35:19.684 00:42:49 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:19.684 00:42:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:19.684 00:42:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:19.684 00:42:49 -- common/autotest_common.sh@10 -- # set +x 00:35:19.684 ************************************ 00:35:19.684 START TEST nvmf_dif 00:35:19.684 ************************************ 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:19.684 * Looking for test storage... 
00:35:19.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.684 00:42:49 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:19.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.684 --rc genhtml_branch_coverage=1 00:35:19.684 --rc genhtml_function_coverage=1 00:35:19.684 --rc genhtml_legend=1 00:35:19.684 --rc geninfo_all_blocks=1 00:35:19.684 --rc geninfo_unexecuted_blocks=1 00:35:19.684 00:35:19.684 ' 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:19.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.684 --rc genhtml_branch_coverage=1 00:35:19.684 --rc genhtml_function_coverage=1 00:35:19.684 --rc genhtml_legend=1 00:35:19.684 --rc geninfo_all_blocks=1 00:35:19.684 --rc geninfo_unexecuted_blocks=1 00:35:19.684 00:35:19.684 ' 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:35:19.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.684 --rc genhtml_branch_coverage=1 00:35:19.684 --rc genhtml_function_coverage=1 00:35:19.684 --rc genhtml_legend=1 00:35:19.684 --rc geninfo_all_blocks=1 00:35:19.684 --rc geninfo_unexecuted_blocks=1 00:35:19.684 00:35:19.684 ' 00:35:19.684 00:42:49 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:19.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.684 --rc genhtml_branch_coverage=1 00:35:19.684 --rc genhtml_function_coverage=1 00:35:19.684 --rc genhtml_legend=1 00:35:19.684 --rc geninfo_all_blocks=1 00:35:19.684 --rc geninfo_unexecuted_blocks=1 00:35:19.684 00:35:19.684 ' 00:35:19.684 00:42:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.684 00:42:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.684 00:42:50 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.684 00:42:50 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.684 00:42:50 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.684 00:42:50 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.684 00:42:50 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.684 00:42:50 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.684 00:42:50 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.684 00:42:50 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.684 00:42:50 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.684 00:42:50 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.684 00:42:50 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.684 00:42:50 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.684 00:42:50 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.684 00:42:50 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.684 00:42:50 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:19.685 00:42:50 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:19.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.685 00:42:50 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:19.685 00:42:50 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:19.685 00:42:50 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:19.685 00:42:50 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:19.685 00:42:50 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.685 00:42:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:19.685 00:42:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:19.685 00:42:50 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:19.685 00:42:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:27.825 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.825 
00:42:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:27.825 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.825 00:42:57 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:27.826 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:27.826 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:35:27.826 00:35:27.826 --- 10.0.0.2 ping statistics --- 00:35:27.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.826 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:27.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:35:27.826 00:35:27.826 --- 10.0.0.1 ping statistics --- 00:35:27.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.826 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:27.826 00:42:57 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:30.402 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:30.402 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:30.402 00:43:01 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:30.402 00:43:01 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:30.402 00:43:01 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:30.402 00:43:01 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:30.402 00:43:01 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:30.402 00:43:01 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:30.663 00:43:01 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:30.663 00:43:01 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:30.663 00:43:01 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:30.663 00:43:01 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:30.663 00:43:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.663 00:43:01 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=3549917 00:35:30.663 00:43:01 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 3549917 00:35:30.663 00:43:01 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:30.663 00:43:01 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3549917 ']' 00:35:30.663 00:43:01 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.663 00:43:01 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:30.663 00:43:01 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:30.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.663 00:43:01 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:30.663 00:43:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.663 [2024-10-09 00:43:01.115768] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:35:30.663 [2024-10-09 00:43:01.115834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:30.663 [2024-10-09 00:43:01.207993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.924 [2024-10-09 00:43:01.303229] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:30.924 [2024-10-09 00:43:01.303293] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:30.924 [2024-10-09 00:43:01.303302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:30.924 [2024-10-09 00:43:01.303309] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:30.924 [2024-10-09 00:43:01.303317] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:30.924 [2024-10-09 00:43:01.304116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:31.497 00:43:01 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:31.497 00:43:01 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.497 00:43:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:31.497 00:43:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:31.497 [2024-10-09 00:43:01.985838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.497 00:43:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:31.497 00:43:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:31.497 ************************************ 00:35:31.497 START TEST fio_dif_1_default 00:35:31.497 ************************************ 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:31.497 bdev_null0 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:31.497 [2024-10-09 00:43:02.078283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:31.497 { 00:35:31.497 "params": { 00:35:31.497 "name": "Nvme$subsystem", 00:35:31.497 "trtype": "$TEST_TRANSPORT", 00:35:31.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:31.497 "adrfam": "ipv4", 00:35:31.497 "trsvcid": "$NVMF_PORT", 00:35:31.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:31.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:31.497 "hdgst": ${hdgst:-false}, 00:35:31.497 
"ddgst": ${ddgst:-false} 00:35:31.497 }, 00:35:31.497 "method": "bdev_nvme_attach_controller" 00:35:31.497 } 00:35:31.497 EOF 00:35:31.497 )") 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:35:31.497 00:43:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:31.497 "params": { 00:35:31.497 "name": "Nvme0", 00:35:31.497 "trtype": "tcp", 00:35:31.497 "traddr": "10.0.0.2", 00:35:31.497 "adrfam": "ipv4", 00:35:31.497 "trsvcid": "4420", 00:35:31.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:31.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:31.497 "hdgst": false, 00:35:31.497 "ddgst": false 00:35:31.497 }, 00:35:31.497 "method": "bdev_nvme_attach_controller" 00:35:31.497 }' 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:31.771 00:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.066 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:32.066 fio-3.35 00:35:32.066 Starting 1 thread 00:35:44.350 00:35:44.350 filename0: (groupid=0, jobs=1): err= 0: pid=3550510: Wed Oct 9 00:43:13 2024 00:35:44.350 read: IOPS=194, BW=777KiB/s (796kB/s)(7792KiB/10025msec) 00:35:44.350 slat (nsec): min=5363, max=67763, avg=6503.13, stdev=3181.12 00:35:44.350 clat (usec): min=396, max=42084, avg=20567.28, stdev=20182.38 00:35:44.350 lat (usec): min=402, max=42092, avg=20573.78, stdev=20182.05 00:35:44.350 clat percentiles (usec): 00:35:44.350 | 1.00th=[ 469], 5.00th=[ 766], 10.00th=[ 783], 20.00th=[ 807], 00:35:44.350 | 30.00th=[ 824], 40.00th=[ 889], 50.00th=[ 1106], 60.00th=[41157], 00:35:44.350 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:44.350 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:44.350 | 99.99th=[42206] 00:35:44.350 bw ( KiB/s): min= 704, max= 1088, per=99.97%, avg=777.60, stdev=77.76, samples=20 00:35:44.350 iops : min= 176, max= 272, avg=194.40, stdev=19.44, samples=20 00:35:44.350 lat (usec) : 500=1.54%, 750=0.72%, 1000=46.30% 00:35:44.350 lat (msec) : 2=2.36%, 4=0.21%, 50=48.87% 00:35:44.350 cpu : usr=93.57%, sys=6.20%, ctx=9, majf=0, minf=249 00:35:44.350 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.350 issued rwts: total=1948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.350 latency : target=0, window=0, 
percentile=100.00%, depth=4 00:35:44.350 00:35:44.350 Run status group 0 (all jobs): 00:35:44.351 READ: bw=777KiB/s (796kB/s), 777KiB/s-777KiB/s (796kB/s-796kB/s), io=7792KiB (7979kB), run=10025-10025msec 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.351 00:35:44.351 real 0m11.284s 00:35:44.351 user 0m18.303s 00:35:44.351 sys 0m1.076s 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:44.351 00:43:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:44.351 ************************************ 00:35:44.351 END TEST fio_dif_1_default 00:35:44.351 ************************************ 00:35:44.351 00:43:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:44.352 00:43:13 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:44.352 00:43:13 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:44.352 00:43:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:44.352 ************************************ 00:35:44.352 START TEST fio_dif_1_multi_subsystems 00:35:44.352 ************************************ 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.352 bdev_null0 
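Each sub-test provisions its target through the same four-step RPC sequence: the bdev_null_create just above plus the subsystem, namespace, and listener calls that follow (repeated for subsystem 1 in this multi-subsystem case). Driven by hand against the running target, the equivalent calls would be (rpc_cmd is the harness wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket):

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The null bdev is 64 MiB with 512-byte data blocks plus 16 bytes of metadata for the protection information, and because the transport was created with --dif-insert-or-strip the target inserts and strips that metadata itself, so the initiator-side fio jobs only ever see plain 512-byte logical blocks.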
00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.352 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.353 [2024-10-09 00:43:13.444691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.353 bdev_null1 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.353 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:44.354 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:44.354 { 00:35:44.354 "params": { 00:35:44.354 "name": "Nvme$subsystem", 00:35:44.354 "trtype": "$TEST_TRANSPORT", 00:35:44.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:44.355 "adrfam": "ipv4", 00:35:44.355 "trsvcid": "$NVMF_PORT", 00:35:44.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:44.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:44.355 "hdgst": ${hdgst:-false}, 00:35:44.355 "ddgst": ${ddgst:-false} 00:35:44.355 }, 00:35:44.355 "method": "bdev_nvme_attach_controller" 00:35:44.355 } 00:35:44.355 EOF 00:35:44.355 )") 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:44.355 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:44.356 { 00:35:44.356 "params": { 00:35:44.356 "name": "Nvme$subsystem", 00:35:44.356 "trtype": "$TEST_TRANSPORT", 00:35:44.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:44.356 "adrfam": "ipv4", 00:35:44.356 "trsvcid": "$NVMF_PORT", 00:35:44.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:44.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:44.356 "hdgst": ${hdgst:-false}, 00:35:44.356 "ddgst": ${ddgst:-false} 00:35:44.356 }, 00:35:44.356 "method": "bdev_nvme_attach_controller" 00:35:44.356 } 00:35:44.356 EOF 00:35:44.356 )") 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:35:44.356 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:44.356 "params": { 00:35:44.357 "name": "Nvme0", 00:35:44.357 "trtype": "tcp", 00:35:44.357 "traddr": "10.0.0.2", 00:35:44.357 "adrfam": "ipv4", 00:35:44.357 "trsvcid": "4420", 00:35:44.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:44.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:44.357 "hdgst": false, 00:35:44.357 "ddgst": false 00:35:44.357 }, 00:35:44.357 "method": "bdev_nvme_attach_controller" 00:35:44.357 },{ 00:35:44.357 "params": { 00:35:44.357 "name": "Nvme1", 00:35:44.357 "trtype": "tcp", 00:35:44.357 "traddr": "10.0.0.2", 00:35:44.357 "adrfam": "ipv4", 00:35:44.357 "trsvcid": "4420", 00:35:44.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:44.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:44.357 "hdgst": false, 00:35:44.357 "ddgst": false 00:35:44.357 }, 00:35:44.357 "method": "bdev_nvme_attach_controller" 00:35:44.357 }' 00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 
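At this point the generated bdev configuration attaches two independent NVMe-oF controllers, Nvme0 and Nvme1, to the same 10.0.0.2:4420 listener but different NQNs (cnode0 and cnode1), and the generated job file carries one job per file, so the filename0 and filename1 threads below each drive their own namespace. A quick target-side sanity check before the run, assuming the default RPC socket and the spdk repository root as working directory, would be:

  scripts/rpc.py nvmf_get_subsystems            # expect cnode0 and cnode1, each with one namespace and a 10.0.0.2:4420 TCP listener
  scripts/rpc.py bdev_get_bdevs -b bdev_null1   # expect the 64 MiB null bdev with 512+16 byte blocks backing cnode1

The per-job result blocks that follow are reported separately; the READ line in the run-status summary aggregates the two bandwidths.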
00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:44.357 00:43:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.357 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:44.357 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:44.357 fio-3.35 00:35:44.357 Starting 2 threads 00:35:54.349 00:35:54.349 filename0: (groupid=0, jobs=1): err= 0: pid=3552859: Wed Oct 9 00:43:24 2024 00:35:54.349 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec) 00:35:54.349 slat (nsec): min=5381, max=31214, avg=6310.63, stdev=1523.52 00:35:54.349 clat (usec): min=40857, max=42441, avg=41062.47, stdev=270.89 00:35:54.349 lat (usec): min=40863, max=42472, avg=41068.78, stdev=271.08 00:35:54.349 clat percentiles (usec): 00:35:54.349 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:54.349 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:54.349 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:35:54.349 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:54.349 | 99.99th=[42206] 00:35:54.349 bw ( KiB/s): min= 384, max= 416, per=33.86%, avg=388.80, stdev=11.72, samples=20 00:35:54.349 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:54.349 lat (msec) : 50=100.00% 00:35:54.349 cpu : usr=95.20%, sys=4.60%, ctx=10, majf=0, minf=145 00:35:54.349 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.349 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.349 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:54.349 filename1: (groupid=0, jobs=1): err= 0: pid=3552860: Wed Oct 9 00:43:24 2024 00:35:54.349 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:35:54.349 slat (nsec): min=5384, max=32742, avg=6254.12, stdev=1456.67 00:35:54.349 clat (usec): min=423, max=43054, avg=21080.72, stdev=20370.53 00:35:54.349 lat (usec): min=431, max=43086, avg=21086.97, stdev=20370.53 00:35:54.349 clat percentiles (usec): 00:35:54.349 | 1.00th=[ 490], 5.00th=[ 545], 10.00th=[ 562], 20.00th=[ 578], 00:35:54.349 | 30.00th=[ 603], 40.00th=[ 676], 50.00th=[40633], 60.00th=[41157], 00:35:54.349 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:35:54.349 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:35:54.349 | 99.99th=[43254] 00:35:54.349 bw ( KiB/s): min= 672, max= 768, per=66.23%, avg=759.58, stdev=25.78, samples=19 00:35:54.349 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:35:54.349 lat (usec) : 500=1.27%, 750=41.24%, 1000=5.80% 00:35:54.349 lat (msec) : 2=1.48%, 50=50.21% 00:35:54.349 cpu : usr=95.33%, sys=4.45%, ctx=8, majf=0, minf=116 00:35:54.349 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:35:54.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.349 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.349 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:54.349 00:35:54.349 Run status group 0 (all jobs): 00:35:54.349 READ: bw=1146KiB/s (1174kB/s), 389KiB/s-758KiB/s (399kB/s-777kB/s), io=11.2MiB (11.8MB), run=10001-10024msec 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.349 00:35:54.349 real 0m11.500s 00:35:54.349 user 0m34.069s 00:35:54.349 sys 0m1.219s 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:54.349 00:43:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.349 ************************************ 00:35:54.349 END TEST fio_dif_1_multi_subsystems 00:35:54.349 
************************************ 00:35:54.349 00:43:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:54.349 00:43:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:54.349 00:43:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:54.349 00:43:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:54.609 ************************************ 00:35:54.609 START TEST fio_dif_rand_params 00:35:54.609 ************************************ 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:54.609 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:54.610 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:54.610 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:54.610 00:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.610 00:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.610 bdev_null0 00:35:54.610 00:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.610 00:43:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:54.610 00:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.610 00:43:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.610 [2024-10-09 00:43:25.026942] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:54.610 { 00:35:54.610 "params": { 00:35:54.610 "name": "Nvme$subsystem", 00:35:54.610 "trtype": "$TEST_TRANSPORT", 00:35:54.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.610 "adrfam": "ipv4", 00:35:54.610 "trsvcid": "$NVMF_PORT", 00:35:54.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.610 "hdgst": ${hdgst:-false}, 00:35:54.610 "ddgst": ${ddgst:-false} 00:35:54.610 }, 00:35:54.610 "method": "bdev_nvme_attach_controller" 00:35:54.610 } 00:35:54.610 EOF 00:35:54.610 )") 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:54.610 "params": { 00:35:54.610 "name": "Nvme0", 00:35:54.610 "trtype": "tcp", 00:35:54.610 "traddr": "10.0.0.2", 00:35:54.610 "adrfam": "ipv4", 00:35:54.610 "trsvcid": "4420", 00:35:54.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.610 "hdgst": false, 00:35:54.610 "ddgst": false 00:35:54.610 }, 00:35:54.610 "method": "bdev_nvme_attach_controller" 00:35:54.610 }' 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:54.610 00:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.870 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:54.870 ... 
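As in the earlier sub-tests, fio is not run against a kernel block device: the SPDK bdev fio plugin is LD_PRELOADed and handed two generated descriptors, the SPDK bdev JSON config (here a single bdev_nvme_attach_controller entry for cnode0) and the fio job file (here 128 KiB random reads, three jobs, queue depth 3, 5-second runtime, the parameters chosen by fio_dif_rand_params). A stand-alone equivalent using ordinary files instead of the /dev/fd substitutions would be, with illustrative paths and file names:

  # nvme_bdev.json - SPDK bdev config holding the bdev_nvme_attach_controller entry printed above
  # rand.fio       - job file with ioengine=spdk_bdev and thread=1, filename set to the attached
  #                  bdev (Nvme0n1 under SPDK's controller-name + namespace-id naming)
  LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvme_bdev.json rand.fio

The three result blocks that follow are per-job; the run-status group at the end aggregates them.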
00:35:54.870 fio-3.35 00:35:54.870 Starting 3 threads 00:36:01.472 00:36:01.472 filename0: (groupid=0, jobs=1): err= 0: pid=3555054: Wed Oct 9 00:43:31 2024 00:36:01.472 read: IOPS=349, BW=43.6MiB/s (45.7MB/s)(219MiB/5008msec) 00:36:01.472 slat (nsec): min=5379, max=30984, avg=6148.39, stdev=1568.57 00:36:01.472 clat (usec): min=4129, max=86161, avg=8586.11, stdev=8151.62 00:36:01.472 lat (usec): min=4134, max=86167, avg=8592.26, stdev=8151.59 00:36:01.472 clat percentiles (usec): 00:36:01.472 | 1.00th=[ 4817], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6259], 00:36:01.472 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7242], 00:36:01.472 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8291], 95.00th=[ 8979], 00:36:01.472 | 99.00th=[47449], 99.50th=[48497], 99.90th=[85459], 99.95th=[86508], 00:36:01.472 | 99.99th=[86508] 00:36:01.472 bw ( KiB/s): min=34304, max=54528, per=37.11%, avg=44672.00, stdev=7626.00, samples=10 00:36:01.472 iops : min= 268, max= 426, avg=349.00, stdev=59.58, samples=10 00:36:01.472 lat (msec) : 10=95.82%, 20=0.17%, 50=3.89%, 100=0.11% 00:36:01.472 cpu : usr=93.65%, sys=6.13%, ctx=9, majf=0, minf=126 00:36:01.472 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.472 issued rwts: total=1748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.472 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:01.472 filename0: (groupid=0, jobs=1): err= 0: pid=3555055: Wed Oct 9 00:43:31 2024 00:36:01.472 read: IOPS=303, BW=38.0MiB/s (39.8MB/s)(192MiB/5046msec) 00:36:01.472 slat (nsec): min=5415, max=31525, avg=6037.42, stdev=889.58 00:36:01.472 clat (usec): min=4766, max=88986, avg=9844.77, stdev=7140.59 00:36:01.472 lat (usec): min=4772, max=88993, avg=9850.80, stdev=7140.75 00:36:01.472 clat percentiles (usec): 00:36:01.472 | 1.00th=[ 5473], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7504], 00:36:01.472 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634], 00:36:01.472 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11469], 00:36:01.472 | 99.00th=[49546], 99.50th=[54264], 99.90th=[88605], 99.95th=[88605], 00:36:01.472 | 99.99th=[88605] 00:36:01.472 bw ( KiB/s): min=32000, max=46080, per=32.54%, avg=39168.00, stdev=4251.28, samples=10 00:36:01.472 iops : min= 250, max= 360, avg=306.00, stdev=33.21, samples=10 00:36:01.472 lat (msec) : 10=70.37%, 20=27.74%, 50=1.04%, 100=0.85% 00:36:01.472 cpu : usr=94.29%, sys=5.49%, ctx=6, majf=0, minf=86 00:36:01.472 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.472 issued rwts: total=1532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.472 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:01.472 filename0: (groupid=0, jobs=1): err= 0: pid=3555056: Wed Oct 9 00:43:31 2024 00:36:01.472 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(183MiB/5042msec) 00:36:01.472 slat (nsec): min=5440, max=30770, avg=8307.36, stdev=2339.91 00:36:01.472 clat (usec): min=4607, max=90496, avg=10287.03, stdev=7237.46 00:36:01.472 lat (usec): min=4616, max=90505, avg=10295.33, stdev=7237.79 00:36:01.472 clat percentiles (usec): 00:36:01.472 | 1.00th=[ 5080], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7701], 
00:36:01.472 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[ 9896], 00:36:01.472 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[11994], 00:36:01.472 | 99.00th=[49546], 99.50th=[50594], 99.90th=[86508], 99.95th=[90702], 00:36:01.472 | 99.99th=[90702] 00:36:01.472 bw ( KiB/s): min=15616, max=44544, per=31.12%, avg=37452.80, stdev=8252.27, samples=10 00:36:01.472 iops : min= 122, max= 348, avg=292.60, stdev=64.47, samples=10 00:36:01.472 lat (msec) : 10=64.51%, 20=32.63%, 50=2.05%, 100=0.82% 00:36:01.473 cpu : usr=92.05%, sys=6.43%, ctx=590, majf=0, minf=58 00:36:01.473 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.473 issued rwts: total=1465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.473 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:01.473 00:36:01.473 Run status group 0 (all jobs): 00:36:01.473 READ: bw=118MiB/s (123MB/s), 36.3MiB/s-43.6MiB/s (38.1MB/s-45.7MB/s), io=593MiB (622MB), run=5008-5046msec 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 bdev_null0 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 [2024-10-09 00:43:31.323648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 bdev_null1 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 bdev_null2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:01.473 { 00:36:01.473 "params": { 00:36:01.473 "name": "Nvme$subsystem", 00:36:01.473 "trtype": "$TEST_TRANSPORT", 00:36:01.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:01.473 "adrfam": "ipv4", 00:36:01.473 "trsvcid": "$NVMF_PORT", 00:36:01.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:01.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:01.473 "hdgst": ${hdgst:-false}, 00:36:01.473 "ddgst": ${ddgst:-false} 00:36:01.473 }, 00:36:01.473 "method": "bdev_nvme_attach_controller" 00:36:01.473 } 00:36:01.473 EOF 00:36:01.473 )") 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:01.473 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:01.473 { 00:36:01.474 "params": { 00:36:01.474 "name": "Nvme$subsystem", 00:36:01.474 "trtype": "$TEST_TRANSPORT", 00:36:01.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:01.474 "adrfam": "ipv4", 00:36:01.474 "trsvcid": "$NVMF_PORT", 00:36:01.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:01.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:01.474 "hdgst": ${hdgst:-false}, 00:36:01.474 "ddgst": ${ddgst:-false} 00:36:01.474 }, 00:36:01.474 "method": "bdev_nvme_attach_controller" 00:36:01.474 } 00:36:01.474 EOF 00:36:01.474 )") 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:01.474 { 00:36:01.474 "params": { 00:36:01.474 "name": "Nvme$subsystem", 00:36:01.474 "trtype": "$TEST_TRANSPORT", 00:36:01.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:01.474 "adrfam": "ipv4", 00:36:01.474 "trsvcid": "$NVMF_PORT", 00:36:01.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:01.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:01.474 "hdgst": ${hdgst:-false}, 00:36:01.474 "ddgst": ${ddgst:-false} 00:36:01.474 }, 00:36:01.474 "method": "bdev_nvme_attach_controller" 00:36:01.474 } 00:36:01.474 EOF 00:36:01.474 )") 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:01.474 "params": { 00:36:01.474 "name": "Nvme0", 00:36:01.474 "trtype": "tcp", 00:36:01.474 "traddr": "10.0.0.2", 00:36:01.474 "adrfam": "ipv4", 00:36:01.474 "trsvcid": "4420", 00:36:01.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:01.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:01.474 "hdgst": false, 00:36:01.474 "ddgst": false 00:36:01.474 }, 00:36:01.474 "method": "bdev_nvme_attach_controller" 00:36:01.474 },{ 00:36:01.474 "params": { 00:36:01.474 "name": "Nvme1", 00:36:01.474 "trtype": "tcp", 00:36:01.474 "traddr": "10.0.0.2", 00:36:01.474 "adrfam": "ipv4", 00:36:01.474 "trsvcid": "4420", 00:36:01.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:01.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:01.474 "hdgst": false, 00:36:01.474 "ddgst": false 00:36:01.474 }, 00:36:01.474 "method": "bdev_nvme_attach_controller" 00:36:01.474 },{ 00:36:01.474 "params": { 00:36:01.474 "name": "Nvme2", 00:36:01.474 "trtype": "tcp", 00:36:01.474 "traddr": "10.0.0.2", 00:36:01.474 "adrfam": "ipv4", 00:36:01.474 "trsvcid": "4420", 00:36:01.474 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:01.474 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:01.474 "hdgst": false, 00:36:01.474 "ddgst": false 00:36:01.474 }, 00:36:01.474 "method": "bdev_nvme_attach_controller" 00:36:01.474 }' 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:01.474 00:43:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:01.474 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:01.474 ... 00:36:01.474 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:01.474 ... 00:36:01.474 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:01.474 ... 00:36:01.474 fio-3.35 00:36:01.474 Starting 24 threads 00:36:13.708 00:36:13.708 filename0: (groupid=0, jobs=1): err= 0: pid=3556562: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=691, BW=2765KiB/s (2831kB/s)(27.0MiB/10003msec) 00:36:13.708 slat (usec): min=5, max=139, avg=17.81, stdev=18.11 00:36:13.708 clat (usec): min=6023, max=30914, avg=23001.89, stdev=1996.32 00:36:13.708 lat (usec): min=6033, max=30920, avg=23019.69, stdev=1996.12 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[12911], 5.00th=[21103], 10.00th=[22152], 20.00th=[22676], 00:36:13.708 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.708 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:36:13.708 | 99.00th=[25560], 99.50th=[26346], 99.90th=[27132], 99.95th=[27657], 00:36:13.708 | 99.99th=[30802] 00:36:13.708 bw ( KiB/s): min= 2560, max= 3168, per=4.15%, avg=2762.63, stdev=125.86, samples=19 00:36:13.708 iops : min= 640, max= 792, avg=690.63, stdev=31.45, samples=19 00:36:13.708 lat (msec) : 10=0.39%, 20=3.59%, 50=96.02% 00:36:13.708 cpu : usr=98.95%, sys=0.73%, ctx=14, majf=0, minf=40 00:36:13.708 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=6914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename0: (groupid=0, jobs=1): err= 0: pid=3556564: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=744, BW=2977KiB/s (3048kB/s)(29.1MiB/10020msec) 00:36:13.708 slat (usec): min=5, max=166, avg=19.48, stdev=20.37 00:36:13.708 clat (usec): min=1798, max=43211, avg=21336.53, stdev=4657.40 00:36:13.708 lat (usec): min=1817, max=43228, avg=21356.00, stdev=4660.90 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[ 7373], 5.00th=[13960], 10.00th=[15008], 20.00th=[17171], 00:36:13.708 | 30.00th=[19530], 40.00th=[22152], 50.00th=[22676], 60.00th=[22938], 00:36:13.708 | 70.00th=[23200], 80.00th=[23725], 90.00th=[24249], 95.00th=[27395], 00:36:13.708 | 99.00th=[35914], 99.50th=[36963], 99.90th=[41157], 99.95th=[43254], 00:36:13.708 | 99.99th=[43254] 00:36:13.708 bw ( KiB/s): min= 2688, max= 3456, per=4.48%, avg=2978.80, stdev=246.36, samples=20 00:36:13.708 iops : min= 672, max= 864, avg=744.70, stdev=61.59, samples=20 00:36:13.708 lat (msec) : 2=0.13%, 4=0.30%, 10=0.90%, 20=29.54%, 50=69.13% 00:36:13.708 cpu : usr=99.03%, sys=0.64%, ctx=41, majf=0, 
minf=49 00:36:13.708 IO depths : 1=2.9%, 2=6.0%, 4=15.5%, 8=65.8%, 16=9.9%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=91.5%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=7457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename0: (groupid=0, jobs=1): err= 0: pid=3556566: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=706, BW=2826KiB/s (2894kB/s)(27.6MiB/10015msec) 00:36:13.708 slat (usec): min=5, max=190, avg=24.22, stdev=24.17 00:36:13.708 clat (usec): min=4406, max=39296, avg=22454.01, stdev=3532.08 00:36:13.708 lat (usec): min=4420, max=39314, avg=22478.23, stdev=3534.35 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[10028], 5.00th=[15008], 10.00th=[17695], 20.00th=[22152], 00:36:13.708 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:13.708 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25297], 00:36:13.708 | 99.00th=[35390], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:36:13.708 | 99.99th=[39060] 00:36:13.708 bw ( KiB/s): min= 2640, max= 3280, per=4.26%, avg=2830.42, stdev=146.11, samples=19 00:36:13.708 iops : min= 660, max= 820, avg=707.58, stdev=36.55, samples=19 00:36:13.708 lat (msec) : 10=0.96%, 20=11.90%, 50=87.14% 00:36:13.708 cpu : usr=99.08%, sys=0.61%, ctx=14, majf=0, minf=51 00:36:13.708 IO depths : 1=4.8%, 2=9.7%, 4=20.8%, 8=56.8%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=7075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename0: (groupid=0, jobs=1): err= 0: pid=3556567: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=691, BW=2767KiB/s (2833kB/s)(27.0MiB/10005msec) 00:36:13.708 slat (usec): min=5, max=176, avg=29.68, stdev=22.76 00:36:13.708 clat (usec): min=5241, max=42074, avg=22876.23, stdev=2864.95 00:36:13.708 lat (usec): min=5250, max=42090, avg=22905.91, stdev=2866.78 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[12387], 5.00th=[17957], 10.00th=[21365], 20.00th=[22414], 00:36:13.708 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:13.708 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24511], 95.00th=[25297], 00:36:13.708 | 99.00th=[31851], 99.50th=[34866], 99.90th=[42206], 99.95th=[42206], 00:36:13.708 | 99.99th=[42206] 00:36:13.708 bw ( KiB/s): min= 2565, max= 2960, per=4.13%, avg=2749.74, stdev=98.29, samples=19 00:36:13.708 iops : min= 641, max= 740, avg=687.42, stdev=24.60, samples=19 00:36:13.708 lat (msec) : 10=0.46%, 20=7.43%, 50=92.11% 00:36:13.708 cpu : usr=98.65%, sys=0.84%, ctx=62, majf=0, minf=25 00:36:13.708 IO depths : 1=4.2%, 2=8.8%, 4=20.4%, 8=58.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=93.0%, 8=1.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=6920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename0: (groupid=0, jobs=1): err= 0: pid=3556568: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=684, BW=2738KiB/s (2804kB/s)(26.8MiB/10009msec) 00:36:13.708 slat (usec): min=5, 
max=147, avg=30.64, stdev=24.42 00:36:13.708 clat (usec): min=10571, max=44550, avg=23102.52, stdev=1934.35 00:36:13.708 lat (usec): min=10580, max=44579, avg=23133.16, stdev=1933.03 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[15008], 5.00th=[21627], 10.00th=[22152], 20.00th=[22414], 00:36:13.708 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:13.708 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:36:13.708 | 99.00th=[28443], 99.50th=[33162], 99.90th=[35390], 99.95th=[41157], 00:36:13.708 | 99.99th=[44303] 00:36:13.708 bw ( KiB/s): min= 2560, max= 2832, per=4.11%, avg=2736.84, stdev=78.47, samples=19 00:36:13.708 iops : min= 640, max= 708, avg=684.21, stdev=19.62, samples=19 00:36:13.708 lat (msec) : 20=3.34%, 50=96.66% 00:36:13.708 cpu : usr=98.96%, sys=0.71%, ctx=14, majf=0, minf=28 00:36:13.708 IO depths : 1=5.5%, 2=11.4%, 4=23.3%, 8=52.6%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=6852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename0: (groupid=0, jobs=1): err= 0: pid=3556569: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=683, BW=2732KiB/s (2798kB/s)(26.7MiB/10002msec) 00:36:13.708 slat (usec): min=5, max=110, avg=18.26, stdev=16.35 00:36:13.708 clat (usec): min=13358, max=33629, avg=23271.24, stdev=1140.08 00:36:13.708 lat (usec): min=13364, max=33634, avg=23289.50, stdev=1138.06 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[20317], 5.00th=[22152], 10.00th=[22414], 20.00th=[22676], 00:36:13.708 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.708 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:36:13.708 | 99.00th=[25560], 99.50th=[25560], 99.90th=[30802], 99.95th=[31851], 00:36:13.708 | 99.99th=[33817] 00:36:13.708 bw ( KiB/s): min= 2560, max= 2816, per=4.10%, avg=2727.79, stdev=74.36, samples=19 00:36:13.708 iops : min= 640, max= 704, avg=681.89, stdev=18.58, samples=19 00:36:13.708 lat (msec) : 20=0.82%, 50=99.18% 00:36:13.708 cpu : usr=99.06%, sys=0.62%, ctx=14, majf=0, minf=41 00:36:13.708 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=6832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename0: (groupid=0, jobs=1): err= 0: pid=3556570: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=689, BW=2760KiB/s (2826kB/s)(27.0MiB/10012msec) 00:36:13.708 slat (usec): min=5, max=132, avg=12.85, stdev=13.78 00:36:13.708 clat (usec): min=4959, max=43084, avg=23082.09, stdev=2449.97 00:36:13.708 lat (usec): min=4972, max=43089, avg=23094.94, stdev=2448.42 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[ 8356], 5.00th=[21890], 10.00th=[22152], 20.00th=[22676], 00:36:13.708 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.708 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:36:13.708 | 99.00th=[26084], 99.50th=[28705], 99.90th=[43254], 99.95th=[43254], 00:36:13.708 | 99.99th=[43254] 00:36:13.708 bw ( KiB/s): min= 2682, max= 
3376, per=4.15%, avg=2757.58, stdev=160.44, samples=19 00:36:13.708 iops : min= 670, max= 844, avg=689.37, stdev=40.12, samples=19 00:36:13.708 lat (msec) : 10=1.03%, 20=1.91%, 50=97.06% 00:36:13.708 cpu : usr=98.90%, sys=0.77%, ctx=16, majf=0, minf=34 00:36:13.708 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=6908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename0: (groupid=0, jobs=1): err= 0: pid=3556571: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=705, BW=2823KiB/s (2891kB/s)(27.6MiB/10005msec) 00:36:13.708 slat (usec): min=5, max=245, avg=24.20, stdev=23.70 00:36:13.708 clat (usec): min=10674, max=45602, avg=22483.17, stdev=4514.47 00:36:13.708 lat (usec): min=10681, max=45611, avg=22507.37, stdev=4517.93 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[12518], 5.00th=[14353], 10.00th=[16188], 20.00th=[19792], 00:36:13.708 | 30.00th=[22152], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:13.708 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25560], 95.00th=[30278], 00:36:13.708 | 99.00th=[38011], 99.50th=[39584], 99.90th=[44303], 99.95th=[45351], 00:36:13.708 | 99.99th=[45351] 00:36:13.708 bw ( KiB/s): min= 2640, max= 2997, per=4.25%, avg=2824.79, stdev=107.49, samples=19 00:36:13.708 iops : min= 660, max= 749, avg=706.16, stdev=26.84, samples=19 00:36:13.708 lat (msec) : 20=20.68%, 50=79.32% 00:36:13.708 cpu : usr=99.10%, sys=0.57%, ctx=30, majf=0, minf=70 00:36:13.708 IO depths : 1=2.7%, 2=5.4%, 4=14.9%, 8=66.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=91.4%, 8=3.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=7061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename1: (groupid=0, jobs=1): err= 0: pid=3556572: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=690, BW=2761KiB/s (2827kB/s)(27.0MiB/10005msec) 00:36:13.708 slat (usec): min=5, max=295, avg=21.77, stdev=19.67 00:36:13.708 clat (usec): min=5095, max=42784, avg=22997.25, stdev=3164.55 00:36:13.708 lat (usec): min=5102, max=42812, avg=23019.03, stdev=3165.10 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[11338], 5.00th=[17433], 10.00th=[21890], 20.00th=[22414], 00:36:13.708 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.708 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035], 00:36:13.708 | 99.00th=[35914], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:36:13.708 | 99.99th=[42730] 00:36:13.708 bw ( KiB/s): min= 2560, max= 2960, per=4.13%, avg=2746.37, stdev=112.08, samples=19 00:36:13.708 iops : min= 640, max= 740, avg=686.58, stdev=28.04, samples=19 00:36:13.708 lat (msec) : 10=0.52%, 20=6.57%, 50=92.90% 00:36:13.708 cpu : usr=98.65%, sys=0.81%, ctx=92, majf=0, minf=36 00:36:13.708 IO depths : 1=2.9%, 2=7.6%, 4=19.3%, 8=59.7%, 16=10.5%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=92.9%, 8=2.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=6906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename1: (groupid=0, jobs=1): err= 0: pid=3556573: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=695, BW=2783KiB/s (2850kB/s)(27.2MiB/10002msec) 00:36:13.708 slat (usec): min=5, max=150, avg=21.48, stdev=21.63 00:36:13.708 clat (usec): min=6169, max=53343, avg=22857.73, stdev=4662.84 00:36:13.708 lat (usec): min=6189, max=53362, avg=22879.22, stdev=4665.63 00:36:13.708 clat percentiles (usec): 00:36:13.708 | 1.00th=[11469], 5.00th=[14746], 10.00th=[16581], 20.00th=[20579], 00:36:13.708 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23462], 00:36:13.708 | 70.00th=[23725], 80.00th=[24249], 90.00th=[27657], 95.00th=[32113], 00:36:13.708 | 99.00th=[37487], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:36:13.708 | 99.99th=[53216] 00:36:13.708 bw ( KiB/s): min= 2496, max= 3056, per=4.17%, avg=2775.16, stdev=143.11, samples=19 00:36:13.708 iops : min= 624, max= 764, avg=693.79, stdev=35.78, samples=19 00:36:13.708 lat (msec) : 10=0.56%, 20=18.12%, 50=81.30%, 100=0.01% 00:36:13.708 cpu : usr=98.95%, sys=0.70%, ctx=17, majf=0, minf=58 00:36:13.708 IO depths : 1=0.9%, 2=2.3%, 4=10.4%, 8=73.0%, 16=13.4%, 32=0.0%, >=64=0.0% 00:36:13.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 complete : 0=0.0%, 4=90.6%, 8=5.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.708 issued rwts: total=6959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.708 filename1: (groupid=0, jobs=1): err= 0: pid=3556574: Wed Oct 9 00:43:42 2024 00:36:13.708 read: IOPS=683, BW=2735KiB/s (2801kB/s)(26.7MiB/10001msec) 00:36:13.709 slat (usec): min=5, max=139, avg=26.11, stdev=19.47 00:36:13.709 clat (usec): min=8577, max=44570, avg=23157.40, stdev=2136.07 00:36:13.709 lat (usec): min=8589, max=44597, avg=23183.50, stdev=2135.43 00:36:13.709 clat percentiles (usec): 00:36:13.709 | 1.00th=[12256], 5.00th=[22152], 10.00th=[22414], 20.00th=[22676], 00:36:13.709 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:36:13.709 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773], 00:36:13.709 | 99.00th=[27657], 99.50th=[33817], 99.90th=[43254], 99.95th=[43254], 00:36:13.709 | 99.99th=[44827] 00:36:13.709 bw ( KiB/s): min= 2560, max= 2864, per=4.10%, avg=2724.21, stdev=74.83, samples=19 00:36:13.709 iops : min= 640, max= 716, avg=681.05, stdev=18.71, samples=19 00:36:13.709 lat (msec) : 10=0.23%, 20=2.54%, 50=97.22% 00:36:13.709 cpu : usr=98.87%, sys=0.80%, ctx=17, majf=0, minf=33 00:36:13.709 IO depths : 1=5.6%, 2=11.7%, 4=24.5%, 8=51.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:13.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 issued rwts: total=6838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.709 filename1: (groupid=0, jobs=1): err= 0: pid=3556575: Wed Oct 9 00:43:42 2024 00:36:13.709 read: IOPS=701, BW=2805KiB/s (2872kB/s)(27.4MiB/10011msec) 00:36:13.709 slat (usec): min=4, max=151, avg=29.63, stdev=23.96 00:36:13.709 clat (usec): min=10386, max=44638, avg=22545.70, stdev=3052.61 00:36:13.709 lat (usec): min=10394, max=44644, avg=22575.33, stdev=3056.85 00:36:13.709 clat percentiles (usec): 00:36:13.709 | 1.00th=[13435], 5.00th=[15664], 10.00th=[18220], 20.00th=[22152], 00:36:13.709 | 30.00th=[22414], 40.00th=[22676], 
50.00th=[22938], 60.00th=[23200], 00:36:13.709 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24511], 95.00th=[25297], 00:36:13.709 | 99.00th=[32113], 99.50th=[33817], 99.90th=[37487], 99.95th=[40109], 00:36:13.709 | 99.99th=[44827] 00:36:13.709 bw ( KiB/s): min= 2560, max= 3152, per=4.22%, avg=2807.26, stdev=171.57, samples=19 00:36:13.709 iops : min= 640, max= 788, avg=701.79, stdev=42.87, samples=19 00:36:13.709 lat (msec) : 20=12.49%, 50=87.51% 00:36:13.709 cpu : usr=98.98%, sys=0.67%, ctx=14, majf=0, minf=33 00:36:13.709 IO depths : 1=4.6%, 2=9.4%, 4=20.2%, 8=57.6%, 16=8.2%, 32=0.0%, >=64=0.0% 00:36:13.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 complete : 0=0.0%, 4=92.7%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 issued rwts: total=7020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.709 filename1: (groupid=0, jobs=1): err= 0: pid=3556576: Wed Oct 9 00:43:42 2024 00:36:13.709 read: IOPS=688, BW=2756KiB/s (2822kB/s)(26.9MiB/10012msec) 00:36:13.709 slat (usec): min=5, max=143, avg=27.60, stdev=22.37 00:36:13.709 clat (usec): min=6518, max=40149, avg=22983.34, stdev=3265.70 00:36:13.709 lat (usec): min=6524, max=40157, avg=23010.94, stdev=3267.09 00:36:13.709 clat percentiles (usec): 00:36:13.709 | 1.00th=[12911], 5.00th=[17171], 10.00th=[20055], 20.00th=[22414], 00:36:13.709 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:13.709 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24773], 95.00th=[27657], 00:36:13.709 | 99.00th=[35390], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:36:13.709 | 99.99th=[40109] 00:36:13.709 bw ( KiB/s): min= 2560, max= 3056, per=4.13%, avg=2749.16, stdev=107.35, samples=19 00:36:13.709 iops : min= 640, max= 764, avg=687.26, stdev=26.83, samples=19 00:36:13.709 lat (msec) : 10=0.06%, 20=10.09%, 50=89.85% 00:36:13.709 cpu : usr=98.83%, sys=0.73%, ctx=49, majf=0, minf=20 00:36:13.709 IO depths : 1=4.0%, 2=8.4%, 4=19.0%, 8=59.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:13.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 complete : 0=0.0%, 4=92.5%, 8=2.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 issued rwts: total=6898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.709 filename1: (groupid=0, jobs=1): err= 0: pid=3556577: Wed Oct 9 00:43:42 2024 00:36:13.709 read: IOPS=687, BW=2751KiB/s (2817kB/s)(26.9MiB/10014msec) 00:36:13.709 slat (usec): min=5, max=136, avg=16.82, stdev=18.44 00:36:13.709 clat (usec): min=5355, max=33056, avg=23133.00, stdev=1936.41 00:36:13.709 lat (usec): min=5363, max=33116, avg=23149.82, stdev=1935.76 00:36:13.709 clat percentiles (usec): 00:36:13.709 | 1.00th=[12911], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:13.709 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:36:13.709 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:36:13.709 | 99.00th=[26346], 99.50th=[28181], 99.90th=[31327], 99.95th=[32900], 00:36:13.709 | 99.99th=[33162] 00:36:13.709 bw ( KiB/s): min= 2560, max= 3072, per=4.12%, avg=2741.58, stdev=115.57, samples=19 00:36:13.709 iops : min= 640, max= 768, avg=685.37, stdev=28.91, samples=19 00:36:13.709 lat (msec) : 10=0.23%, 20=2.77%, 50=96.99% 00:36:13.709 cpu : usr=99.04%, sys=0.64%, ctx=14, majf=0, minf=33 00:36:13.709 IO depths : 1=5.9%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 
00:36:13.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.709 filename1: (groupid=0, jobs=1): err= 0: pid=3556578: Wed Oct 9 00:43:42 2024 00:36:13.709 read: IOPS=697, BW=2791KiB/s (2857kB/s)(27.4MiB/10044msec) 00:36:13.709 slat (usec): min=5, max=141, avg=20.92, stdev=21.01 00:36:13.709 clat (usec): min=4478, max=49905, avg=22679.87, stdev=4195.11 00:36:13.709 lat (usec): min=4485, max=49921, avg=22700.79, stdev=4197.37 00:36:13.709 clat percentiles (usec): 00:36:13.709 | 1.00th=[12125], 5.00th=[14877], 10.00th=[16909], 20.00th=[20841], 00:36:13.709 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:13.709 | 70.00th=[23725], 80.00th=[23987], 90.00th=[25822], 95.00th=[30016], 00:36:13.709 | 99.00th=[36963], 99.50th=[39060], 99.90th=[41681], 99.95th=[49546], 00:36:13.709 | 99.99th=[50070] 00:36:13.709 bw ( KiB/s): min= 2512, max= 3056, per=4.21%, avg=2800.15, stdev=146.15, samples=20 00:36:13.709 iops : min= 628, max= 764, avg=700.00, stdev=36.55, samples=20 00:36:13.709 lat (msec) : 10=0.36%, 20=16.51%, 50=83.13% 00:36:13.709 cpu : usr=98.89%, sys=0.77%, ctx=21, majf=0, minf=47 00:36:13.709 IO depths : 1=2.5%, 2=5.2%, 4=13.3%, 8=67.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:36:13.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 complete : 0=0.0%, 4=90.9%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 issued rwts: total=7007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.709 filename1: (groupid=0, jobs=1): err= 0: pid=3556579: Wed Oct 9 00:43:42 2024 00:36:13.709 read: IOPS=704, BW=2817KiB/s (2885kB/s)(27.5MiB/10010msec) 00:36:13.709 slat (usec): min=4, max=157, avg=27.96, stdev=24.96 00:36:13.709 clat (usec): min=10707, max=40526, avg=22458.83, stdev=3118.35 00:36:13.709 lat (usec): min=10715, max=40553, avg=22486.79, stdev=3122.19 00:36:13.709 clat percentiles (usec): 00:36:13.709 | 1.00th=[12911], 5.00th=[15664], 10.00th=[17957], 20.00th=[22152], 00:36:13.709 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:13.709 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25035], 00:36:13.709 | 99.00th=[31589], 99.50th=[36439], 99.90th=[39584], 99.95th=[40633], 00:36:13.709 | 99.99th=[40633] 00:36:13.709 bw ( KiB/s): min= 2682, max= 3120, per=4.24%, avg=2820.74, stdev=135.82, samples=19 00:36:13.709 iops : min= 670, max= 780, avg=705.16, stdev=33.98, samples=19 00:36:13.709 lat (msec) : 20=13.57%, 50=86.43% 00:36:13.709 cpu : usr=98.94%, sys=0.70%, ctx=34, majf=0, minf=39 00:36:13.709 IO depths : 1=4.5%, 2=9.5%, 4=21.0%, 8=56.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:36:13.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 complete : 0=0.0%, 4=93.0%, 8=1.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 issued rwts: total=7050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.709 filename2: (groupid=0, jobs=1): err= 0: pid=3556580: Wed Oct 9 00:43:42 2024 00:36:13.709 read: IOPS=691, BW=2767KiB/s (2833kB/s)(27.0MiB/10006msec) 00:36:13.709 slat (usec): min=5, max=143, avg=18.12, stdev=18.86 00:36:13.709 clat (usec): min=5114, max=62290, avg=23036.03, 
stdev=4474.76 00:36:13.709 lat (usec): min=5122, max=62308, avg=23054.14, stdev=4476.01 00:36:13.709 clat percentiles (usec): 00:36:13.709 | 1.00th=[11469], 5.00th=[14877], 10.00th=[17695], 20.00th=[21365], 00:36:13.709 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.709 | 70.00th=[23725], 80.00th=[24511], 90.00th=[27132], 95.00th=[30802], 00:36:13.709 | 99.00th=[36439], 99.50th=[37487], 99.90th=[44303], 99.95th=[62129], 00:36:13.709 | 99.99th=[62129] 00:36:13.709 bw ( KiB/s): min= 2452, max= 2928, per=4.14%, avg=2753.47, stdev=118.34, samples=19 00:36:13.709 iops : min= 613, max= 732, avg=688.37, stdev=29.58, samples=19 00:36:13.709 lat (msec) : 10=0.78%, 20=14.71%, 50=84.44%, 100=0.07% 00:36:13.709 cpu : usr=98.97%, sys=0.68%, ctx=18, majf=0, minf=32 00:36:13.709 IO depths : 1=0.4%, 2=1.3%, 4=7.1%, 8=76.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:36:13.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 complete : 0=0.0%, 4=89.9%, 8=7.0%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 issued rwts: total=6921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.709 filename2: (groupid=0, jobs=1): err= 0: pid=3556581: Wed Oct 9 00:43:42 2024 00:36:13.709 read: IOPS=689, BW=2757KiB/s (2824kB/s)(27.0MiB/10021msec) 00:36:13.709 slat (usec): min=5, max=145, avg=20.41, stdev=19.78 00:36:13.709 clat (usec): min=5795, max=30886, avg=23039.59, stdev=2179.51 00:36:13.709 lat (usec): min=5801, max=30892, avg=23059.99, stdev=2180.07 00:36:13.709 clat percentiles (usec): 00:36:13.709 | 1.00th=[10290], 5.00th=[21627], 10.00th=[22152], 20.00th=[22676], 00:36:13.709 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.709 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:36:13.709 | 99.00th=[26608], 99.50th=[28967], 99.90th=[29492], 99.95th=[29492], 00:36:13.709 | 99.99th=[30802] 00:36:13.709 bw ( KiB/s): min= 2688, max= 3120, per=4.14%, avg=2756.80, stdev=107.91, samples=20 00:36:13.709 iops : min= 672, max= 780, avg=689.20, stdev=26.98, samples=20 00:36:13.709 lat (msec) : 10=0.83%, 20=2.65%, 50=96.53% 00:36:13.709 cpu : usr=98.91%, sys=0.75%, ctx=16, majf=0, minf=40 00:36:13.709 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:13.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.709 issued rwts: total=6908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.709 filename2: (groupid=0, jobs=1): err= 0: pid=3556582: Wed Oct 9 00:43:42 2024 00:36:13.709 read: IOPS=689, BW=2757KiB/s (2823kB/s)(26.9MiB/10001msec) 00:36:13.709 slat (usec): min=5, max=140, avg=23.35, stdev=23.01 00:36:13.709 clat (usec): min=8643, max=43480, avg=23096.86, stdev=3341.12 00:36:13.709 lat (usec): min=8674, max=43498, avg=23120.21, stdev=3341.67 00:36:13.710 clat percentiles (usec): 00:36:13.710 | 1.00th=[13435], 5.00th=[16909], 10.00th=[19530], 20.00th=[22414], 00:36:13.710 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.710 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25035], 95.00th=[28967], 00:36:13.710 | 99.00th=[33424], 99.50th=[35914], 99.90th=[43254], 99.95th=[43254], 00:36:13.710 | 99.99th=[43254] 00:36:13.710 bw ( KiB/s): min= 2560, max= 2912, per=4.13%, avg=2746.95, stdev=79.12, samples=19 00:36:13.710 
iops : min= 640, max= 728, avg=686.74, stdev=19.78, samples=19 00:36:13.710 lat (msec) : 10=0.35%, 20=10.66%, 50=88.99% 00:36:13.710 cpu : usr=98.85%, sys=0.81%, ctx=14, majf=0, minf=33 00:36:13.710 IO depths : 1=0.3%, 2=0.7%, 4=5.5%, 8=77.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:36:13.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 complete : 0=0.0%, 4=90.9%, 8=5.7%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 issued rwts: total=6892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.710 filename2: (groupid=0, jobs=1): err= 0: pid=3556583: Wed Oct 9 00:43:42 2024 00:36:13.710 read: IOPS=725, BW=2901KiB/s (2971kB/s)(28.3MiB/10005msec) 00:36:13.710 slat (usec): min=5, max=195, avg=25.04, stdev=25.12 00:36:13.710 clat (usec): min=10995, max=44646, avg=21865.54, stdev=4169.40 00:36:13.710 lat (usec): min=11003, max=44659, avg=21890.58, stdev=4175.47 00:36:13.710 clat percentiles (usec): 00:36:13.710 | 1.00th=[12911], 5.00th=[14222], 10.00th=[15664], 20.00th=[18220], 00:36:13.710 | 30.00th=[21103], 40.00th=[22414], 50.00th=[22676], 60.00th=[22938], 00:36:13.710 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25035], 95.00th=[27919], 00:36:13.710 | 99.00th=[36439], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:36:13.710 | 99.99th=[44827] 00:36:13.710 bw ( KiB/s): min= 2688, max= 3344, per=4.37%, avg=2906.63, stdev=174.37, samples=19 00:36:13.710 iops : min= 672, max= 836, avg=726.63, stdev=43.60, samples=19 00:36:13.710 lat (msec) : 20=26.49%, 50=73.51% 00:36:13.710 cpu : usr=99.01%, sys=0.67%, ctx=33, majf=0, minf=39 00:36:13.710 IO depths : 1=2.8%, 2=5.7%, 4=14.9%, 8=66.6%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:13.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 complete : 0=0.0%, 4=91.3%, 8=3.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 issued rwts: total=7256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.710 filename2: (groupid=0, jobs=1): err= 0: pid=3556584: Wed Oct 9 00:43:42 2024 00:36:13.710 read: IOPS=687, BW=2752KiB/s (2818kB/s)(26.9MiB/10009msec) 00:36:13.710 slat (usec): min=5, max=128, avg=22.36, stdev=18.17 00:36:13.710 clat (usec): min=7791, max=44708, avg=23081.73, stdev=2642.11 00:36:13.710 lat (usec): min=7797, max=44726, avg=23104.10, stdev=2643.22 00:36:13.710 clat percentiles (usec): 00:36:13.710 | 1.00th=[12387], 5.00th=[18744], 10.00th=[21890], 20.00th=[22676], 00:36:13.710 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.710 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25560], 00:36:13.710 | 99.00th=[31851], 99.50th=[33817], 99.90th=[42730], 99.95th=[44303], 00:36:13.710 | 99.99th=[44827] 00:36:13.710 bw ( KiB/s): min= 2640, max= 2928, per=4.13%, avg=2744.42, stdev=82.73, samples=19 00:36:13.710 iops : min= 660, max= 732, avg=686.11, stdev=20.68, samples=19 00:36:13.710 lat (msec) : 10=0.57%, 20=5.91%, 50=93.52% 00:36:13.710 cpu : usr=99.05%, sys=0.67%, ctx=13, majf=0, minf=28 00:36:13.710 IO depths : 1=4.3%, 2=8.7%, 4=19.0%, 8=59.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:13.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 complete : 0=0.0%, 4=92.5%, 8=2.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.710 filename2: 
(groupid=0, jobs=1): err= 0: pid=3556585: Wed Oct 9 00:43:42 2024 00:36:13.710 read: IOPS=682, BW=2732KiB/s (2797kB/s)(26.7MiB/10004msec) 00:36:13.710 slat (usec): min=4, max=136, avg=19.36, stdev=16.69 00:36:13.710 clat (usec): min=10607, max=35343, avg=23263.41, stdev=1110.51 00:36:13.710 lat (usec): min=10613, max=35351, avg=23282.77, stdev=1108.82 00:36:13.710 clat percentiles (usec): 00:36:13.710 | 1.00th=[20055], 5.00th=[22152], 10.00th=[22414], 20.00th=[22676], 00:36:13.710 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.710 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:13.710 | 99.00th=[25560], 99.50th=[25560], 99.90th=[26870], 99.95th=[29492], 00:36:13.710 | 99.99th=[35390] 00:36:13.710 bw ( KiB/s): min= 2682, max= 2816, per=4.10%, avg=2728.11, stdev=61.36, samples=19 00:36:13.710 iops : min= 670, max= 704, avg=682.00, stdev=15.36, samples=19 00:36:13.710 lat (msec) : 20=0.92%, 50=99.08% 00:36:13.710 cpu : usr=99.09%, sys=0.62%, ctx=32, majf=0, minf=27 00:36:13.710 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:13.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 issued rwts: total=6832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.710 filename2: (groupid=0, jobs=1): err= 0: pid=3556586: Wed Oct 9 00:43:42 2024 00:36:13.710 read: IOPS=681, BW=2726KiB/s (2791kB/s)(26.6MiB/10005msec) 00:36:13.710 slat (usec): min=5, max=130, avg=17.18, stdev=17.70 00:36:13.710 clat (usec): min=4886, max=42830, avg=23366.45, stdev=4296.24 00:36:13.710 lat (usec): min=4892, max=42855, avg=23383.63, stdev=4297.60 00:36:13.710 clat percentiles (usec): 00:36:13.710 | 1.00th=[11207], 5.00th=[15664], 10.00th=[19006], 20.00th=[22152], 00:36:13.710 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.710 | 70.00th=[23987], 80.00th=[24511], 90.00th=[27395], 95.00th=[31065], 00:36:13.710 | 99.00th=[38011], 99.50th=[40633], 99.90th=[42206], 99.95th=[42730], 00:36:13.710 | 99.99th=[42730] 00:36:13.710 bw ( KiB/s): min= 2528, max= 2928, per=4.08%, avg=2716.05, stdev=114.15, samples=19 00:36:13.710 iops : min= 632, max= 732, avg=679.00, stdev=28.55, samples=19 00:36:13.710 lat (msec) : 10=0.81%, 20=11.56%, 50=87.64% 00:36:13.710 cpu : usr=98.63%, sys=0.82%, ctx=186, majf=0, minf=26 00:36:13.710 IO depths : 1=1.6%, 2=3.5%, 4=10.6%, 8=71.3%, 16=13.0%, 32=0.0%, >=64=0.0% 00:36:13.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 complete : 0=0.0%, 4=90.6%, 8=5.7%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 issued rwts: total=6818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.710 filename2: (groupid=0, jobs=1): err= 0: pid=3556587: Wed Oct 9 00:43:42 2024 00:36:13.710 read: IOPS=685, BW=2743KiB/s (2808kB/s)(26.8MiB/10005msec) 00:36:13.710 slat (usec): min=5, max=189, avg=22.22, stdev=22.53 00:36:13.710 clat (usec): min=3807, max=57529, avg=23215.98, stdev=3613.75 00:36:13.710 lat (usec): min=3812, max=57546, avg=23238.21, stdev=3614.22 00:36:13.710 clat percentiles (usec): 00:36:13.710 | 1.00th=[12518], 5.00th=[16909], 10.00th=[20841], 20.00th=[22414], 00:36:13.710 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:13.710 | 70.00th=[23725], 80.00th=[23987], 
90.00th=[24773], 95.00th=[27919], 00:36:13.710 | 99.00th=[34341], 99.50th=[38536], 99.90th=[57410], 99.95th=[57410], 00:36:13.710 | 99.99th=[57410] 00:36:13.710 bw ( KiB/s): min= 2452, max= 3008, per=4.10%, avg=2726.95, stdev=118.81, samples=19 00:36:13.710 iops : min= 613, max= 752, avg=681.74, stdev=29.70, samples=19 00:36:13.710 lat (msec) : 4=0.10%, 10=0.51%, 20=7.36%, 50=91.79%, 100=0.23% 00:36:13.710 cpu : usr=99.03%, sys=0.60%, ctx=97, majf=0, minf=29 00:36:13.710 IO depths : 1=0.5%, 2=1.2%, 4=5.4%, 8=77.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:36:13.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 complete : 0=0.0%, 4=89.8%, 8=7.5%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.710 issued rwts: total=6860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:13.710 00:36:13.710 Run status group 0 (all jobs): 00:36:13.710 READ: bw=64.9MiB/s (68.1MB/s), 2726KiB/s-2977KiB/s (2791kB/s-3048kB/s), io=652MiB (684MB), run=10001-10044msec 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@45 -- # for sub in "$@" 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:13.710 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.711 bdev_null0 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.711 [2024-10-09 00:43:43.228624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.711 bdev_null1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:13.711 { 00:36:13.711 "params": { 00:36:13.711 "name": "Nvme$subsystem", 00:36:13.711 "trtype": "$TEST_TRANSPORT", 00:36:13.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.711 "adrfam": "ipv4", 00:36:13.711 "trsvcid": "$NVMF_PORT", 00:36:13.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.711 "hdgst": ${hdgst:-false}, 00:36:13.711 "ddgst": ${ddgst:-false} 00:36:13.711 }, 00:36:13.711 "method": "bdev_nvme_attach_controller" 00:36:13.711 } 00:36:13.711 EOF 00:36:13.711 )") 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:13.711 { 00:36:13.711 "params": { 00:36:13.711 "name": "Nvme$subsystem", 00:36:13.711 "trtype": "$TEST_TRANSPORT", 00:36:13.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.711 "adrfam": "ipv4", 00:36:13.711 "trsvcid": "$NVMF_PORT", 00:36:13.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.711 "hdgst": ${hdgst:-false}, 00:36:13.711 "ddgst": ${ddgst:-false} 00:36:13.711 }, 00:36:13.711 "method": "bdev_nvme_attach_controller" 00:36:13.711 } 00:36:13.711 EOF 00:36:13.711 )") 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:13.711 "params": { 00:36:13.711 "name": "Nvme0", 00:36:13.711 "trtype": "tcp", 00:36:13.711 "traddr": "10.0.0.2", 00:36:13.711 "adrfam": "ipv4", 00:36:13.711 "trsvcid": "4420", 00:36:13.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:13.711 "hdgst": false, 00:36:13.711 "ddgst": false 00:36:13.711 }, 00:36:13.711 "method": "bdev_nvme_attach_controller" 00:36:13.711 },{ 00:36:13.711 "params": { 00:36:13.711 "name": "Nvme1", 00:36:13.711 "trtype": "tcp", 00:36:13.711 "traddr": "10.0.0.2", 00:36:13.711 "adrfam": "ipv4", 00:36:13.711 "trsvcid": "4420", 00:36:13.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:13.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:13.711 "hdgst": false, 00:36:13.711 "ddgst": false 00:36:13.711 }, 00:36:13.711 "method": "bdev_nvme_attach_controller" 00:36:13.711 }' 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:13.711 00:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.711 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:13.711 ... 00:36:13.711 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:13.711 ... 
00:36:13.711 fio-3.35 00:36:13.711 Starting 4 threads 00:36:19.025 00:36:19.025 filename0: (groupid=0, jobs=1): err= 0: pid=3558808: Wed Oct 9 00:43:49 2024 00:36:19.025 read: IOPS=2979, BW=23.3MiB/s (24.4MB/s)(116MiB/5002msec) 00:36:19.025 slat (nsec): min=5380, max=67101, avg=6270.29, stdev=2485.37 00:36:19.025 clat (usec): min=1122, max=4399, avg=2668.24, stdev=221.07 00:36:19.025 lat (usec): min=1141, max=4404, avg=2674.51, stdev=220.92 00:36:19.025 clat percentiles (usec): 00:36:19.025 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2606], 00:36:19.025 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:19.025 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2835], 95.00th=[ 2933], 00:36:19.025 | 99.00th=[ 3752], 99.50th=[ 3982], 99.90th=[ 4228], 99.95th=[ 4228], 00:36:19.025 | 99.99th=[ 4424] 00:36:19.025 bw ( KiB/s): min=23728, max=23968, per=25.05%, avg=23838.40, stdev=82.78, samples=10 00:36:19.025 iops : min= 2966, max= 2996, avg=2979.80, stdev=10.35, samples=10 00:36:19.025 lat (msec) : 2=0.58%, 4=99.03%, 10=0.39% 00:36:19.025 cpu : usr=96.12%, sys=3.64%, ctx=6, majf=0, minf=43 00:36:19.025 IO depths : 1=0.1%, 2=0.1%, 4=70.4%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.025 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.025 issued rwts: total=14904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.025 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:19.025 filename0: (groupid=0, jobs=1): err= 0: pid=3558809: Wed Oct 9 00:43:49 2024 00:36:19.025 read: IOPS=3005, BW=23.5MiB/s (24.6MB/s)(117MiB/5001msec) 00:36:19.025 slat (nsec): min=5383, max=75609, avg=6233.76, stdev=2689.94 00:36:19.025 clat (usec): min=1343, max=4705, avg=2646.75, stdev=273.91 00:36:19.025 lat (usec): min=1349, max=4711, avg=2652.98, stdev=273.91 00:36:19.025 clat percentiles (usec): 00:36:19.025 | 1.00th=[ 1958], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2540], 00:36:19.025 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:19.025 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2999], 00:36:19.025 | 99.00th=[ 3818], 99.50th=[ 3916], 99.90th=[ 4015], 99.95th=[ 4293], 00:36:19.025 | 99.99th=[ 4686] 00:36:19.025 bw ( KiB/s): min=23728, max=24320, per=25.26%, avg=24038.40, stdev=169.53, samples=10 00:36:19.025 iops : min= 2966, max= 3040, avg=3004.80, stdev=21.19, samples=10 00:36:19.025 lat (msec) : 2=1.34%, 4=98.40%, 10=0.27% 00:36:19.025 cpu : usr=96.70%, sys=3.08%, ctx=5, majf=0, minf=79 00:36:19.025 IO depths : 1=0.1%, 2=0.2%, 4=68.6%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.025 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.025 issued rwts: total=15029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.025 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:19.025 filename1: (groupid=0, jobs=1): err= 0: pid=3558810: Wed Oct 9 00:43:49 2024 00:36:19.025 read: IOPS=2975, BW=23.2MiB/s (24.4MB/s)(116MiB/5001msec) 00:36:19.025 slat (nsec): min=5381, max=82734, avg=6296.65, stdev=2555.69 00:36:19.025 clat (usec): min=1241, max=5237, avg=2671.61, stdev=223.84 00:36:19.025 lat (usec): min=1246, max=5264, avg=2677.91, stdev=223.99 00:36:19.025 clat percentiles (usec): 00:36:19.025 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2606], 00:36:19.025 | 30.00th=[ 2638], 
40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:19.025 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2933], 00:36:19.025 | 99.00th=[ 3818], 99.50th=[ 3982], 99.90th=[ 4293], 99.95th=[ 5145], 00:36:19.025 | 99.99th=[ 5211] 00:36:19.025 bw ( KiB/s): min=23326, max=24064, per=25.01%, avg=23798.20, stdev=220.53, samples=10 00:36:19.025 iops : min= 2915, max= 3008, avg=2974.70, stdev=27.75, samples=10 00:36:19.025 lat (msec) : 2=0.46%, 4=99.15%, 10=0.40% 00:36:19.025 cpu : usr=96.54%, sys=3.20%, ctx=6, majf=0, minf=37 00:36:19.026 IO depths : 1=0.1%, 2=0.1%, 4=72.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.026 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.026 issued rwts: total=14879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.026 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:19.026 filename1: (groupid=0, jobs=1): err= 0: pid=3558811: Wed Oct 9 00:43:49 2024 00:36:19.026 read: IOPS=2937, BW=23.0MiB/s (24.1MB/s)(115MiB/5001msec) 00:36:19.026 slat (nsec): min=5375, max=83962, avg=6074.27, stdev=2355.28 00:36:19.026 clat (usec): min=900, max=5191, avg=2706.27, stdev=264.54 00:36:19.026 lat (usec): min=906, max=5225, avg=2712.35, stdev=264.74 00:36:19.026 clat percentiles (usec): 00:36:19.026 | 1.00th=[ 2245], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2638], 00:36:19.026 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:19.026 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2900], 95.00th=[ 3064], 00:36:19.026 | 99.00th=[ 3982], 99.50th=[ 4113], 99.90th=[ 4686], 99.95th=[ 4752], 00:36:19.026 | 99.99th=[ 5145] 00:36:19.026 bw ( KiB/s): min=22989, max=23728, per=24.69%, avg=23494.10, stdev=204.08, samples=10 00:36:19.026 iops : min= 2873, max= 2966, avg=2936.70, stdev=25.68, samples=10 00:36:19.026 lat (usec) : 1000=0.02% 00:36:19.026 lat (msec) : 2=0.27%, 4=98.75%, 10=0.96% 00:36:19.026 cpu : usr=97.02%, sys=2.76%, ctx=6, majf=0, minf=35 00:36:19.026 IO depths : 1=0.1%, 2=0.1%, 4=73.7%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.026 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.026 issued rwts: total=14692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.026 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:19.026 00:36:19.026 Run status group 0 (all jobs): 00:36:19.026 READ: bw=92.9MiB/s (97.5MB/s), 23.0MiB/s-23.5MiB/s (24.1MB/s-24.6MB/s), io=465MiB (487MB), run=5001-5002msec 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.026 
00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.026 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.287 00:36:19.287 real 0m24.702s 00:36:19.287 user 5m18.217s 00:36:19.287 sys 0m4.362s 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:19.287 00:43:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.287 ************************************ 00:36:19.287 END TEST fio_dif_rand_params 00:36:19.287 ************************************ 00:36:19.287 00:43:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:19.287 00:43:49 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:19.287 00:43:49 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:19.287 00:43:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:19.287 ************************************ 00:36:19.287 START TEST fio_dif_digest 00:36:19.287 ************************************ 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:19.287 00:43:49 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.287 bdev_null0 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.287 [2024-10-09 00:43:49.813367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:19.287 { 00:36:19.287 "params": { 00:36:19.287 "name": "Nvme$subsystem", 00:36:19.287 "trtype": "$TEST_TRANSPORT", 00:36:19.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:19.287 "adrfam": "ipv4", 00:36:19.287 "trsvcid": "$NVMF_PORT", 00:36:19.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:36:19.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:19.287 "hdgst": ${hdgst:-false}, 00:36:19.287 "ddgst": ${ddgst:-false} 00:36:19.287 }, 00:36:19.287 "method": "bdev_nvme_attach_controller" 00:36:19.287 } 00:36:19.287 EOF 00:36:19.287 )") 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:19.287 "params": { 00:36:19.287 "name": "Nvme0", 00:36:19.287 "trtype": "tcp", 00:36:19.287 "traddr": "10.0.0.2", 00:36:19.287 "adrfam": "ipv4", 00:36:19.287 "trsvcid": "4420", 00:36:19.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:19.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:19.287 "hdgst": true, 00:36:19.287 "ddgst": true 00:36:19.287 }, 00:36:19.287 "method": "bdev_nvme_attach_controller" 00:36:19.287 }' 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:19.287 00:43:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:19.885 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:19.885 ... 
00:36:19.885 fio-3.35 00:36:19.885 Starting 3 threads 00:36:32.126 00:36:32.126 filename0: (groupid=0, jobs=1): err= 0: pid=3560282: Wed Oct 9 00:44:00 2024 00:36:32.126 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10047msec) 00:36:32.126 slat (nsec): min=5786, max=48220, avg=8497.85, stdev=1682.28 00:36:32.126 clat (usec): min=7676, max=51738, avg=10293.09, stdev=1872.12 00:36:32.126 lat (usec): min=7684, max=51744, avg=10301.59, stdev=1872.08 00:36:32.127 clat percentiles (usec): 00:36:32.127 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:36:32.127 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:36:32.127 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:36:32.127 | 99.00th=[12387], 99.50th=[13042], 99.90th=[51119], 99.95th=[51119], 00:36:32.127 | 99.99th=[51643] 00:36:32.127 bw ( KiB/s): min=34560, max=38912, per=32.42%, avg=37363.20, stdev=904.05, samples=20 00:36:32.127 iops : min= 270, max= 304, avg=291.90, stdev= 7.06, samples=20 00:36:32.127 lat (msec) : 10=40.05%, 20=59.77%, 50=0.03%, 100=0.14% 00:36:32.127 cpu : usr=94.47%, sys=5.27%, ctx=33, majf=0, minf=142 00:36:32.127 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.127 issued rwts: total=2921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.127 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.127 filename0: (groupid=0, jobs=1): err= 0: pid=3560283: Wed Oct 9 00:44:00 2024 00:36:32.127 read: IOPS=303, BW=38.0MiB/s (39.8MB/s)(381MiB/10046msec) 00:36:32.127 slat (nsec): min=5673, max=39724, avg=7302.24, stdev=1870.75 00:36:32.127 clat (usec): min=6018, max=48536, avg=9857.66, stdev=1274.09 00:36:32.127 lat (usec): min=6025, max=48543, avg=9864.96, stdev=1274.11 00:36:32.127 clat percentiles (usec): 00:36:32.127 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9110], 00:36:32.127 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:36:32.127 | 70.00th=[10159], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:36:32.127 | 99.00th=[11994], 99.50th=[12256], 99.90th=[13829], 99.95th=[45351], 00:36:32.127 | 99.99th=[48497] 00:36:32.127 bw ( KiB/s): min=37632, max=39936, per=33.86%, avg=39018.20, stdev=595.66, samples=20 00:36:32.127 iops : min= 294, max= 312, avg=304.80, stdev= 4.70, samples=20 00:36:32.127 lat (msec) : 10=59.44%, 20=40.49%, 50=0.07% 00:36:32.127 cpu : usr=95.48%, sys=4.29%, ctx=19, majf=0, minf=239 00:36:32.127 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.127 issued rwts: total=3050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.127 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.127 filename0: (groupid=0, jobs=1): err= 0: pid=3560284: Wed Oct 9 00:44:00 2024 00:36:32.127 read: IOPS=306, BW=38.3MiB/s (40.1MB/s)(384MiB/10044msec) 00:36:32.127 slat (nsec): min=5710, max=32151, avg=7517.48, stdev=1722.91 00:36:32.127 clat (usec): min=6735, max=51140, avg=9776.11, stdev=1263.15 00:36:32.127 lat (usec): min=6742, max=51147, avg=9783.63, stdev=1263.14 00:36:32.127 clat percentiles (usec): 00:36:32.127 | 1.00th=[ 7963], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9110], 
00:36:32.127 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:36:32.127 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:36:32.127 | 99.00th=[11469], 99.50th=[11731], 99.90th=[12125], 99.95th=[47973], 00:36:32.127 | 99.99th=[51119] 00:36:32.127 bw ( KiB/s): min=38400, max=40448, per=34.13%, avg=39330.50, stdev=532.65, samples=20 00:36:32.127 iops : min= 300, max= 316, avg=307.25, stdev= 4.19, samples=20 00:36:32.127 lat (msec) : 10=64.13%, 20=35.80%, 50=0.03%, 100=0.03% 00:36:32.127 cpu : usr=95.93%, sys=3.84%, ctx=14, majf=0, minf=116 00:36:32.127 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.127 issued rwts: total=3075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.127 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.127 00:36:32.127 Run status group 0 (all jobs): 00:36:32.127 READ: bw=113MiB/s (118MB/s), 36.3MiB/s-38.3MiB/s (38.1MB/s-40.1MB/s), io=1131MiB (1186MB), run=10044-10047msec 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.127 00:36:32.127 real 0m11.104s 00:36:32.127 user 0m45.166s 00:36:32.127 sys 0m1.666s 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:32.127 00:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.127 ************************************ 00:36:32.127 END TEST fio_dif_digest 00:36:32.127 ************************************ 00:36:32.127 00:44:00 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:32.127 00:44:00 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:32.127 rmmod nvme_tcp 00:36:32.127 rmmod nvme_fabrics 00:36:32.127 rmmod nvme_keyring 00:36:32.127 00:44:00 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 3549917 ']' 00:36:32.127 00:44:00 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 3549917 00:36:32.127 00:44:00 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3549917 ']' 00:36:32.127 00:44:00 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3549917 00:36:32.127 00:44:00 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:32.127 00:44:00 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:32.127 00:44:01 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3549917 00:36:32.127 00:44:01 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:32.127 00:44:01 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:32.127 00:44:01 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3549917' 00:36:32.127 killing process with pid 3549917 00:36:32.127 00:44:01 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3549917 00:36:32.127 00:44:01 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3549917 00:36:32.127 00:44:01 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:32.127 00:44:01 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:34.041 Waiting for block devices as requested 00:36:34.041 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:34.041 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:34.301 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:34.301 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:34.301 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:34.561 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:34.561 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:34.561 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:34.561 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:34.821 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:34.821 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:35.082 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:35.082 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:35.082 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:35.343 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:35.343 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:35.343 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:35.343 00:44:05 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:35.343 00:44:05 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:35.343 00:44:05 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:35.343 00:44:05 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:36:35.343 00:44:05 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:35.343 00:44:05 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:36:35.343 00:44:05 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:35.343 00:44:05 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:35.343 00:44:05 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.343 00:44:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:35.343 00:44:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.891 00:44:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:37.891 
00:36:37.891 real 1m18.224s 00:36:37.891 user 7m58.093s 00:36:37.891 sys 0m21.832s 00:36:37.891 00:44:08 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:37.891 00:44:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:37.891 ************************************ 00:36:37.891 END TEST nvmf_dif 00:36:37.891 ************************************ 00:36:37.891 00:44:08 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:37.891 00:44:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:37.891 00:44:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:37.891 00:44:08 -- common/autotest_common.sh@10 -- # set +x 00:36:37.891 ************************************ 00:36:37.891 START TEST nvmf_abort_qd_sizes 00:36:37.891 ************************************ 00:36:37.891 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:37.891 * Looking for test storage... 00:36:37.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:37.891 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:37.891 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:36:37.891 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:37.891 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:37.891 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:37.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.892 --rc genhtml_branch_coverage=1 00:36:37.892 --rc genhtml_function_coverage=1 00:36:37.892 --rc genhtml_legend=1 00:36:37.892 --rc geninfo_all_blocks=1 00:36:37.892 --rc geninfo_unexecuted_blocks=1 00:36:37.892 00:36:37.892 ' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:37.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.892 --rc genhtml_branch_coverage=1 00:36:37.892 --rc genhtml_function_coverage=1 00:36:37.892 --rc genhtml_legend=1 00:36:37.892 --rc geninfo_all_blocks=1 00:36:37.892 --rc geninfo_unexecuted_blocks=1 00:36:37.892 00:36:37.892 ' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:37.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.892 --rc genhtml_branch_coverage=1 00:36:37.892 --rc genhtml_function_coverage=1 00:36:37.892 --rc genhtml_legend=1 00:36:37.892 --rc geninfo_all_blocks=1 00:36:37.892 --rc geninfo_unexecuted_blocks=1 00:36:37.892 00:36:37.892 ' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:37.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.892 --rc genhtml_branch_coverage=1 00:36:37.892 --rc genhtml_function_coverage=1 00:36:37.892 --rc genhtml_legend=1 00:36:37.892 --rc geninfo_all_blocks=1 00:36:37.892 --rc geninfo_unexecuted_blocks=1 00:36:37.892 00:36:37.892 ' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:37.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:37.892 00:44:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:46.034 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:46.034 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:46.034 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:46.034 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:46.034 00:44:15 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:46.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:46.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:36:46.034 00:36:46.034 --- 10.0.0.2 ping statistics --- 00:36:46.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.034 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:36:46.034 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:46.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:46.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:36:46.034 00:36:46.034 --- 10.0.0.1 ping statistics --- 00:36:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.035 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:36:46.035 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:46.035 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:36:46.035 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:36:46.035 00:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:48.680 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:48.680 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=3569566 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 3569566 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3569566 ']' 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.680 00:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:48.681 00:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:48.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.681 00:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:48.681 00:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.681 [2024-10-09 00:44:19.268679] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:36:48.681 [2024-10-09 00:44:19.268731] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.941 [2024-10-09 00:44:19.354108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:48.941 [2024-10-09 00:44:19.424479] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.941 [2024-10-09 00:44:19.424525] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:48.941 [2024-10-09 00:44:19.424534] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:48.941 [2024-10-09 00:44:19.424541] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:48.941 [2024-10-09 00:44:19.424547] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:48.941 [2024-10-09 00:44:19.426273] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.941 [2024-10-09 00:44:19.426425] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:48.941 [2024-10-09 00:44:19.426581] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.941 [2024-10-09 00:44:19.426581] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:49.512 
00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:49.512 00:44:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.773 ************************************ 00:36:49.773 START TEST spdk_target_abort 00:36:49.773 ************************************ 00:36:49.773 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:36:49.773 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:49.773 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:49.773 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.773 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:50.034 spdk_targetn1 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:50.034 [2024-10-09 00:44:20.503388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:50.034 [2024-10-09 00:44:20.543701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:50.034 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:50.035 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:50.035 00:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:50.303 [2024-10-09 00:44:20.799226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:472 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:36:50.303 [2024-10-09 00:44:20.799262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:003c p:1 m:0 dnr:0 00:36:50.303 [2024-10-09 00:44:20.808303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:816 len:8 PRP1 0x2000078be000 PRP2 0x0 00:36:50.303 [2024-10-09 00:44:20.808325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0067 p:1 m:0 dnr:0 00:36:50.303 [2024-10-09 00:44:20.831304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1632 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:36:50.303 [2024-10-09 00:44:20.831326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00cf p:1 m:0 dnr:0 00:36:50.303 [2024-10-09 00:44:20.863272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2824 len:8 PRP1 0x2000078be000 PRP2 0x0 00:36:50.303 [2024-10-09 00:44:20.863298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:50.303 [2024-10-09 00:44:20.893162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3776 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:36:50.303 [2024-10-09 00:44:20.893184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00d9 p:0 m:0 dnr:0 00:36:53.608 Initializing NVMe Controllers 00:36:53.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:53.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:53.609 Initialization complete. Launching workers. 
00:36:53.609 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12509, failed: 5 00:36:53.609 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3033, failed to submit 9481 00:36:53.609 success 744, unsuccessful 2289, failed 0 00:36:53.609 00:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:53.609 00:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:53.609 [2024-10-09 00:44:24.081914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:640 len:8 PRP1 0x200007c58000 PRP2 0x0 00:36:53.609 [2024-10-09 00:44:24.081956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:36:53.609 [2024-10-09 00:44:24.103534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:1176 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:36:53.609 [2024-10-09 00:44:24.103559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:009a p:1 m:0 dnr:0 00:36:53.609 [2024-10-09 00:44:24.111801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:1328 len:8 PRP1 0x200007c42000 PRP2 0x0 00:36:53.609 [2024-10-09 00:44:24.111823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00b0 p:1 m:0 dnr:0 00:36:53.609 [2024-10-09 00:44:24.127852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1728 len:8 PRP1 0x200007c58000 PRP2 0x0 00:36:53.609 [2024-10-09 00:44:24.127882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00d9 p:1 m:0 dnr:0 00:36:53.609 [2024-10-09 00:44:24.210276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:3512 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:36:53.609 [2024-10-09 00:44:24.210303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00bb p:0 m:0 dnr:0 00:36:54.180 [2024-10-09 00:44:24.506838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:10192 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:36:54.180 [2024-10-09 00:44:24.506869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0004 p:1 m:0 dnr:0 00:36:56.723 Initializing NVMe Controllers 00:36:56.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:56.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:56.723 Initialization complete. Launching workers. 
00:36:56.723 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8540, failed: 6 00:36:56.723 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1197, failed to submit 7349 00:36:56.723 success 387, unsuccessful 810, failed 0 00:36:56.723 00:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:56.723 00:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:58.107 [2024-10-09 00:44:28.550151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:186 nsid:1 lba:125568 len:8 PRP1 0x2000078ee000 PRP2 0x0 00:36:58.107 [2024-10-09 00:44:28.550195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:186 cdw0:0 sqhd:0034 p:1 m:0 dnr:0 00:37:00.026 Initializing NVMe Controllers 00:37:00.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:00.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:00.026 Initialization complete. Launching workers. 00:37:00.026 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43752, failed: 1 00:37:00.026 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2712, failed to submit 41041 00:37:00.026 success 607, unsuccessful 2105, failed 0 00:37:00.026 00:44:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:00.026 00:44:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.026 00:44:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:00.026 00:44:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.027 00:44:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:00.027 00:44:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.027 00:44:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3569566 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3569566 ']' 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3569566 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3569566 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:01.937 00:44:32 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3569566' 00:37:01.937 killing process with pid 3569566 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3569566 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3569566 00:37:01.937 00:37:01.937 real 0m12.359s 00:37:01.937 user 0m50.192s 00:37:01.937 sys 0m2.090s 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:01.937 00:44:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.937 ************************************ 00:37:01.937 END TEST spdk_target_abort 00:37:01.937 ************************************ 00:37:02.198 00:44:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:02.198 00:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:02.198 00:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:02.198 00:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:02.198 ************************************ 00:37:02.198 START TEST kernel_target_abort 00:37:02.198 ************************************ 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:02.198 00:44:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:37:02.198 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:02.199 00:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:05.499 Waiting for block devices as requested 00:37:05.499 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:05.499 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:05.759 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:05.759 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:05.759 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:06.019 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:06.019 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:06.019 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:06.280 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:06.280 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:06.280 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:06.540 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:06.540 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:06.540 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:06.801 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:06.801 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:06.801 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:07.067 No valid GPT data, bailing 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:07.067 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:07.067 00:37:07.067 Discovery Log Number of Records 2, Generation counter 2 00:37:07.067 =====Discovery Log Entry 0====== 00:37:07.067 trtype: tcp 00:37:07.067 adrfam: ipv4 00:37:07.067 subtype: current discovery subsystem 00:37:07.067 treq: not specified, sq flow control disable supported 00:37:07.067 portid: 1 00:37:07.067 trsvcid: 4420 00:37:07.067 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:07.067 traddr: 10.0.0.1 00:37:07.067 eflags: none 00:37:07.067 sectype: none 00:37:07.067 =====Discovery Log Entry 1====== 00:37:07.067 trtype: tcp 00:37:07.068 adrfam: ipv4 00:37:07.068 subtype: nvme subsystem 00:37:07.068 treq: not specified, sq flow control disable supported 00:37:07.068 portid: 1 00:37:07.068 trsvcid: 4420 00:37:07.068 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:07.068 traddr: 10.0.0.1 00:37:07.068 eflags: none 00:37:07.068 sectype: none 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:07.068 
00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:07.068 00:44:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:10.370 Initializing NVMe Controllers 00:37:10.370 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:10.370 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:10.370 Initialization complete. Launching workers. 00:37:10.370 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66972, failed: 0 00:37:10.370 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66972, failed to submit 0 00:37:10.370 success 0, unsuccessful 66972, failed 0 00:37:10.370 00:44:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:10.370 00:44:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:13.669 Initializing NVMe Controllers 00:37:13.669 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:13.669 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:13.669 Initialization complete. Launching workers. 
00:37:13.669 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117796, failed: 0 00:37:13.669 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29670, failed to submit 88126 00:37:13.669 success 0, unsuccessful 29670, failed 0 00:37:13.669 00:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:13.669 00:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:16.218 Initializing NVMe Controllers 00:37:16.218 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:16.218 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:16.218 Initialization complete. Launching workers. 00:37:16.218 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145087, failed: 0 00:37:16.218 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36282, failed to submit 108805 00:37:16.218 success 0, unsuccessful 36282, failed 0 00:37:16.218 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:16.218 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:16.218 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:37:16.218 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:16.218 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:16.218 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:16.218 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:16.218 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:37:16.218 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:37:16.478 00:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:19.795 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:19.795 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:19.795 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:21.710 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:21.710 00:37:21.710 real 0m19.505s 00:37:21.710 user 0m9.593s 00:37:21.710 sys 0m5.664s 00:37:21.710 00:44:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:21.710 00:44:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:21.710 ************************************ 00:37:21.710 END TEST kernel_target_abort 00:37:21.710 ************************************ 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:21.710 rmmod nvme_tcp 00:37:21.710 rmmod nvme_fabrics 00:37:21.710 rmmod nvme_keyring 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 3569566 ']' 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 3569566 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3569566 ']' 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3569566 00:37:21.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3569566) - No such process 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3569566 is not found' 00:37:21.710 Process with pid 3569566 is not found 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:37:21.710 00:44:52 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:25.023 Waiting for block devices as requested 00:37:25.023 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:25.024 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:25.284 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:25.284 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:25.284 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:25.556 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:25.556 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:25.556 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:25.556 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:25.816 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:25.816 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:26.076 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:26.076 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:26.076 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:26.336 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:26.336 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:26.336 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:26.336 00:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.881 00:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:28.881 00:37:28.881 real 0m50.949s 00:37:28.881 user 1m4.887s 00:37:28.881 sys 0m18.371s 00:37:28.881 00:44:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:28.881 00:44:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:28.881 ************************************ 00:37:28.881 END TEST nvmf_abort_qd_sizes 00:37:28.881 ************************************ 00:37:28.881 00:44:59 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:28.881 00:44:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:28.881 00:44:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:28.881 00:44:59 -- common/autotest_common.sh@10 -- # set +x 00:37:28.881 ************************************ 00:37:28.881 START TEST keyring_file 00:37:28.881 ************************************ 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:28.881 * Looking for test storage... 
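A quick recap before the keyring_file preamble continues: kernel_target_abort, which finished just above, never starts an SPDK target at all; configure_kernel_target drives the in-kernel nvmet stack purely through configfs and clean_kernel_target unwinds it the same way. The sketch below condenses the traced sequence; the NQN, listen address and /dev/nvme0n1 backing device are the ones from the log, but xtrace does not capture output redirections, so the configfs attribute names are the standard nvmet ones, inferred rather than read from the trace.

# Sketch only: bring up the kernel nvmet target the way configure_kernel_target does.
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # inferred target for the traced echo
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # the target now listens on 10.0.0.1:4420

# Teardown mirrors the clean_kernel_target trace: unlink, disable, remove, unload.
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
echo 0 > "$subsys/namespaces/1/enable"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet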
00:37:28.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:28.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.881 --rc genhtml_branch_coverage=1 00:37:28.881 --rc genhtml_function_coverage=1 00:37:28.881 --rc genhtml_legend=1 00:37:28.881 --rc geninfo_all_blocks=1 00:37:28.881 --rc geninfo_unexecuted_blocks=1 00:37:28.881 00:37:28.881 ' 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:28.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.881 --rc genhtml_branch_coverage=1 00:37:28.881 --rc genhtml_function_coverage=1 00:37:28.881 --rc genhtml_legend=1 00:37:28.881 --rc geninfo_all_blocks=1 
00:37:28.881 --rc geninfo_unexecuted_blocks=1 00:37:28.881 00:37:28.881 ' 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:28.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.881 --rc genhtml_branch_coverage=1 00:37:28.881 --rc genhtml_function_coverage=1 00:37:28.881 --rc genhtml_legend=1 00:37:28.881 --rc geninfo_all_blocks=1 00:37:28.881 --rc geninfo_unexecuted_blocks=1 00:37:28.881 00:37:28.881 ' 00:37:28.881 00:44:59 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:28.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.881 --rc genhtml_branch_coverage=1 00:37:28.881 --rc genhtml_function_coverage=1 00:37:28.881 --rc genhtml_legend=1 00:37:28.881 --rc geninfo_all_blocks=1 00:37:28.881 --rc geninfo_unexecuted_blocks=1 00:37:28.881 00:37:28.881 ' 00:37:28.881 00:44:59 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:28.881 00:44:59 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:28.881 00:44:59 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.881 00:44:59 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.881 00:44:59 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.881 00:44:59 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.882 00:44:59 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.882 00:44:59 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:28.882 00:44:59 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:28.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mFKCRgpwaG 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mFKCRgpwaG 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mFKCRgpwaG 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.mFKCRgpwaG 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5WjNROP9Lw 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:28.882 00:44:59 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5WjNROP9Lw 00:37:28.882 00:44:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5WjNROP9Lw 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5WjNROP9Lw 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@30 -- # tgtpid=3579627 00:37:28.882 00:44:59 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3579627 00:37:28.882 00:44:59 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3579627 ']' 00:37:28.882 00:44:59 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.882 00:44:59 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:28.882 00:44:59 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:28.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:28.882 00:44:59 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:28.882 00:44:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:28.882 [2024-10-09 00:44:59.503950] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:37:28.882 [2024-10-09 00:44:59.504007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579627 ] 00:37:29.143 [2024-10-09 00:44:59.581793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.143 [2024-10-09 00:44:59.648170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.714 00:45:00 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:29.714 00:45:00 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:29.714 00:45:00 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:29.714 00:45:00 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.714 00:45:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.714 [2024-10-09 00:45:00.306842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.714 null0 00:37:29.714 [2024-10-09 00:45:00.338887] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:29.714 [2024-10-09 00:45:00.339312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.975 00:45:00 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.975 [2024-10-09 00:45:00.370950] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:29.975 request: 00:37:29.975 { 00:37:29.975 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:29.975 "secure_channel": false, 00:37:29.975 "listen_address": { 00:37:29.975 "trtype": "tcp", 00:37:29.975 "traddr": "127.0.0.1", 00:37:29.975 "trsvcid": "4420" 00:37:29.975 }, 00:37:29.975 "method": "nvmf_subsystem_add_listener", 00:37:29.975 "req_id": 1 00:37:29.975 } 00:37:29.975 Got JSON-RPC error response 00:37:29.975 response: 00:37:29.975 { 00:37:29.975 
"code": -32602, 00:37:29.975 "message": "Invalid parameters" 00:37:29.975 } 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:29.975 00:45:00 keyring_file -- keyring/file.sh@47 -- # bperfpid=3579956 00:37:29.975 00:45:00 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3579956 /var/tmp/bperf.sock 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3579956 ']' 00:37:29.975 00:45:00 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:29.975 00:45:00 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:29.976 00:45:00 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:29.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:29.976 00:45:00 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:29.976 00:45:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.976 [2024-10-09 00:45:00.439794] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:37:29.976 [2024-10-09 00:45:00.439853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579956 ] 00:37:29.976 [2024-10-09 00:45:00.521121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.236 [2024-10-09 00:45:00.615531] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.807 00:45:01 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:30.807 00:45:01 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:30.807 00:45:01 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mFKCRgpwaG 00:37:30.807 00:45:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mFKCRgpwaG 00:37:30.807 00:45:01 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5WjNROP9Lw 00:37:30.807 00:45:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5WjNROP9Lw 00:37:31.067 00:45:01 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:31.067 00:45:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:31.067 00:45:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.067 00:45:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.067 00:45:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:37:31.328 00:45:01 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.mFKCRgpwaG == \/\t\m\p\/\t\m\p\.\m\F\K\C\R\g\p\w\a\G ]] 00:37:31.328 00:45:01 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:31.328 00:45:01 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:31.328 00:45:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.328 00:45:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.328 00:45:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:31.589 00:45:02 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.5WjNROP9Lw == \/\t\m\p\/\t\m\p\.\5\W\j\N\R\O\P\9\L\w ]] 00:37:31.589 00:45:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.589 00:45:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:31.589 00:45:02 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.589 00:45:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:31.849 00:45:02 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:31.849 00:45:02 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:31.849 00:45:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:32.108 [2024-10-09 00:45:02.561399] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:32.108 nvme0n1 00:37:32.108 00:45:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:32.108 00:45:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:32.108 00:45:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:32.108 00:45:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.108 00:45:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:32.108 00:45:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.369 00:45:02 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:32.369 00:45:02 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:32.369 00:45:02 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:37:32.369 00:45:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:32.369 00:45:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.369 00:45:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.369 00:45:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:32.630 00:45:03 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:32.630 00:45:03 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:32.630 Running I/O for 1 seconds... 00:37:33.569 18407.00 IOPS, 71.90 MiB/s 00:37:33.569 Latency(us) 00:37:33.569 [2024-10-08T22:45:04.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.569 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:33.569 nvme0n1 : 1.00 18463.29 72.12 0.00 0.00 6919.44 3877.55 21080.75 00:37:33.569 [2024-10-08T22:45:04.204Z] =================================================================================================================== 00:37:33.569 [2024-10-08T22:45:04.204Z] Total : 18463.29 72.12 0.00 0.00 6919.44 3877.55 21080.75 00:37:33.569 { 00:37:33.569 "results": [ 00:37:33.569 { 00:37:33.569 "job": "nvme0n1", 00:37:33.569 "core_mask": "0x2", 00:37:33.569 "workload": "randrw", 00:37:33.569 "percentage": 50, 00:37:33.569 "status": "finished", 00:37:33.569 "queue_depth": 128, 00:37:33.569 "io_size": 4096, 00:37:33.569 "runtime": 1.003884, 00:37:33.569 "iops": 18463.288587127598, 00:37:33.569 "mibps": 72.12222104346718, 00:37:33.569 "io_failed": 0, 00:37:33.569 "io_timeout": 0, 00:37:33.569 "avg_latency_us": 6919.43640751731, 00:37:33.569 "min_latency_us": 3877.5466666666666, 00:37:33.569 "max_latency_us": 21080.746666666666 00:37:33.569 } 00:37:33.569 ], 00:37:33.569 "core_count": 1 00:37:33.569 } 00:37:33.569 00:45:04 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:33.569 00:45:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:33.847 00:45:04 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:33.847 00:45:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.847 00:45:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.847 00:45:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.847 00:45:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:33.847 00:45:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.110 00:45:04 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:34.110 00:45:04 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:34.110 00:45:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.110 00:45:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.110 00:45:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.110 00:45:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.110 00:45:04 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.371 00:45:04 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:34.371 00:45:04 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:34.371 00:45:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:34.371 [2024-10-09 00:45:04.891910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:34.371 [2024-10-09 00:45:04.892466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1982cc0 (107): Transport endpoint is not connected 00:37:34.371 [2024-10-09 00:45:04.893462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1982cc0 (9): Bad file descriptor 00:37:34.371 [2024-10-09 00:45:04.894464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:34.371 [2024-10-09 00:45:04.894470] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:34.371 [2024-10-09 00:45:04.894476] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:34.371 [2024-10-09 00:45:04.894482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:37:34.371 request: 00:37:34.371 { 00:37:34.371 "name": "nvme0", 00:37:34.371 "trtype": "tcp", 00:37:34.371 "traddr": "127.0.0.1", 00:37:34.371 "adrfam": "ipv4", 00:37:34.371 "trsvcid": "4420", 00:37:34.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:34.371 "prchk_reftag": false, 00:37:34.371 "prchk_guard": false, 00:37:34.371 "hdgst": false, 00:37:34.371 "ddgst": false, 00:37:34.371 "psk": "key1", 00:37:34.371 "allow_unrecognized_csi": false, 00:37:34.371 "method": "bdev_nvme_attach_controller", 00:37:34.371 "req_id": 1 00:37:34.371 } 00:37:34.371 Got JSON-RPC error response 00:37:34.371 response: 00:37:34.371 { 00:37:34.371 "code": -5, 00:37:34.371 "message": "Input/output error" 00:37:34.371 } 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:34.371 00:45:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:34.371 00:45:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:34.371 00:45:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:34.371 00:45:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.371 00:45:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.371 00:45:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.371 00:45:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.632 00:45:05 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:34.632 00:45:05 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:34.632 00:45:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.632 00:45:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.632 00:45:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.632 00:45:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.632 00:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.632 00:45:05 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:34.632 00:45:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:34.632 00:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:34.892 00:45:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:34.892 00:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:35.152 00:45:05 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:35.152 00:45:05 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:35.152 00:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.152 00:45:05 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:35.152 00:45:05 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.mFKCRgpwaG 00:37:35.152 00:45:05 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.mFKCRgpwaG 00:37:35.152 00:45:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:35.152 00:45:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.mFKCRgpwaG 00:37:35.152 00:45:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:35.152 00:45:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:35.152 00:45:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:35.152 00:45:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:35.152 00:45:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mFKCRgpwaG 00:37:35.152 00:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mFKCRgpwaG 00:37:35.412 [2024-10-09 00:45:05.905142] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mFKCRgpwaG': 0100660 00:37:35.412 [2024-10-09 00:45:05.905159] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:35.412 request: 00:37:35.412 { 00:37:35.412 "name": "key0", 00:37:35.412 "path": "/tmp/tmp.mFKCRgpwaG", 00:37:35.412 "method": "keyring_file_add_key", 00:37:35.412 "req_id": 1 00:37:35.412 } 00:37:35.412 Got JSON-RPC error response 00:37:35.412 response: 00:37:35.412 { 00:37:35.412 "code": -1, 00:37:35.412 "message": "Operation not permitted" 00:37:35.412 } 00:37:35.412 00:45:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:35.412 00:45:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:35.412 00:45:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:35.412 00:45:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:35.412 00:45:05 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.mFKCRgpwaG 00:37:35.412 00:45:05 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mFKCRgpwaG 00:37:35.412 00:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mFKCRgpwaG 00:37:35.671 00:45:06 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.mFKCRgpwaG 00:37:35.671 00:45:06 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:35.671 00:45:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:35.671 00:45:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.671 00:45:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.671 00:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.671 00:45:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:35.671 00:45:06 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:35.671 00:45:06 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.671 00:45:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:35.671 00:45:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.671 00:45:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:35.671 00:45:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:35.671 00:45:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:35.671 00:45:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:35.671 00:45:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.671 00:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.929 [2024-10-09 00:45:06.430478] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.mFKCRgpwaG': No such file or directory 00:37:35.929 [2024-10-09 00:45:06.430492] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:35.929 [2024-10-09 00:45:06.430505] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:35.929 [2024-10-09 00:45:06.430515] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:35.929 [2024-10-09 00:45:06.430521] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:35.929 [2024-10-09 00:45:06.430525] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:35.929 request: 00:37:35.929 { 00:37:35.929 "name": "nvme0", 00:37:35.929 "trtype": "tcp", 00:37:35.929 "traddr": "127.0.0.1", 00:37:35.929 "adrfam": "ipv4", 00:37:35.929 "trsvcid": "4420", 00:37:35.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:35.929 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:35.929 "prchk_reftag": false, 00:37:35.929 "prchk_guard": false, 00:37:35.929 "hdgst": false, 00:37:35.929 "ddgst": false, 00:37:35.929 "psk": "key0", 00:37:35.929 "allow_unrecognized_csi": false, 00:37:35.929 "method": "bdev_nvme_attach_controller", 00:37:35.929 "req_id": 1 00:37:35.929 } 00:37:35.929 Got JSON-RPC error response 00:37:35.929 response: 00:37:35.929 { 00:37:35.929 "code": -19, 00:37:35.929 "message": "No such device" 00:37:35.929 } 00:37:35.929 00:45:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:35.929 00:45:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:35.929 00:45:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:35.929 00:45:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:35.929 00:45:06 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:35.929 00:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:36.188 00:45:06 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:36.188 00:45:06 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:36.188 00:45:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:36.188 00:45:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:36.188 00:45:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:36.188 00:45:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:36.189 00:45:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qtBSy5rBL9 00:37:36.189 00:45:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:36.189 00:45:06 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:36.189 00:45:06 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:36.189 00:45:06 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:36.189 00:45:06 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:36.189 00:45:06 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:36.189 00:45:06 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:36.189 00:45:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qtBSy5rBL9 00:37:36.189 00:45:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qtBSy5rBL9 00:37:36.189 00:45:06 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.qtBSy5rBL9 00:37:36.189 00:45:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qtBSy5rBL9 00:37:36.189 00:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qtBSy5rBL9 00:37:36.448 00:45:06 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:36.448 00:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:36.708 nvme0n1 00:37:36.708 00:45:07 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:36.708 00:45:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:36.708 00:45:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.708 00:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.708 00:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.708 00:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.708 00:45:07 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:36.708 00:45:07 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:36.708 00:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:36.967 00:45:07 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:36.967 00:45:07 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:36.967 00:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.967 00:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.967 00:45:07 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.227 00:45:07 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:37.227 00:45:07 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:37.227 00:45:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:37.227 00:45:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:37.227 00:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.227 00:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.227 00:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:37.227 00:45:07 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:37.227 00:45:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:37.227 00:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:37.488 00:45:08 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:37.488 00:45:08 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:37.488 00:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.748 00:45:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:37.748 00:45:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qtBSy5rBL9 00:37:37.748 00:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qtBSy5rBL9 00:37:38.008 00:45:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5WjNROP9Lw 00:37:38.008 00:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5WjNROP9Lw 00:37:38.008 00:45:08 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:38.008 00:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:38.271 nvme0n1 00:37:38.271 00:45:08 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:38.271 00:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:38.533 00:45:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:38.533 "subsystems": [ 00:37:38.533 { 00:37:38.533 "subsystem": "keyring", 00:37:38.533 "config": [ 00:37:38.533 { 00:37:38.533 "method": "keyring_file_add_key", 00:37:38.533 "params": { 00:37:38.533 "name": "key0", 00:37:38.533 "path": "/tmp/tmp.qtBSy5rBL9" 00:37:38.533 } 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "method": "keyring_file_add_key", 00:37:38.533 "params": { 00:37:38.533 "name": "key1", 00:37:38.533 "path": "/tmp/tmp.5WjNROP9Lw" 00:37:38.533 } 00:37:38.533 } 00:37:38.533 ] 00:37:38.533 
}, 00:37:38.533 { 00:37:38.533 "subsystem": "iobuf", 00:37:38.533 "config": [ 00:37:38.533 { 00:37:38.533 "method": "iobuf_set_options", 00:37:38.533 "params": { 00:37:38.533 "small_pool_count": 8192, 00:37:38.533 "large_pool_count": 1024, 00:37:38.533 "small_bufsize": 8192, 00:37:38.533 "large_bufsize": 135168 00:37:38.533 } 00:37:38.533 } 00:37:38.533 ] 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "subsystem": "sock", 00:37:38.533 "config": [ 00:37:38.533 { 00:37:38.533 "method": "sock_set_default_impl", 00:37:38.533 "params": { 00:37:38.533 "impl_name": "posix" 00:37:38.533 } 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "method": "sock_impl_set_options", 00:37:38.533 "params": { 00:37:38.533 "impl_name": "ssl", 00:37:38.533 "recv_buf_size": 4096, 00:37:38.533 "send_buf_size": 4096, 00:37:38.533 "enable_recv_pipe": true, 00:37:38.533 "enable_quickack": false, 00:37:38.533 "enable_placement_id": 0, 00:37:38.533 "enable_zerocopy_send_server": true, 00:37:38.533 "enable_zerocopy_send_client": false, 00:37:38.533 "zerocopy_threshold": 0, 00:37:38.533 "tls_version": 0, 00:37:38.533 "enable_ktls": false 00:37:38.533 } 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "method": "sock_impl_set_options", 00:37:38.533 "params": { 00:37:38.533 "impl_name": "posix", 00:37:38.533 "recv_buf_size": 2097152, 00:37:38.533 "send_buf_size": 2097152, 00:37:38.533 "enable_recv_pipe": true, 00:37:38.533 "enable_quickack": false, 00:37:38.533 "enable_placement_id": 0, 00:37:38.533 "enable_zerocopy_send_server": true, 00:37:38.533 "enable_zerocopy_send_client": false, 00:37:38.533 "zerocopy_threshold": 0, 00:37:38.533 "tls_version": 0, 00:37:38.533 "enable_ktls": false 00:37:38.533 } 00:37:38.533 } 00:37:38.533 ] 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "subsystem": "vmd", 00:37:38.533 "config": [] 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "subsystem": "accel", 00:37:38.533 "config": [ 00:37:38.533 { 00:37:38.533 "method": "accel_set_options", 00:37:38.533 "params": { 00:37:38.533 "small_cache_size": 128, 00:37:38.533 "large_cache_size": 16, 00:37:38.533 "task_count": 2048, 00:37:38.533 "sequence_count": 2048, 00:37:38.533 "buf_count": 2048 00:37:38.533 } 00:37:38.533 } 00:37:38.533 ] 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "subsystem": "bdev", 00:37:38.533 "config": [ 00:37:38.533 { 00:37:38.533 "method": "bdev_set_options", 00:37:38.533 "params": { 00:37:38.533 "bdev_io_pool_size": 65535, 00:37:38.533 "bdev_io_cache_size": 256, 00:37:38.533 "bdev_auto_examine": true, 00:37:38.533 "iobuf_small_cache_size": 128, 00:37:38.533 "iobuf_large_cache_size": 16 00:37:38.533 } 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "method": "bdev_raid_set_options", 00:37:38.533 "params": { 00:37:38.533 "process_window_size_kb": 1024, 00:37:38.533 "process_max_bandwidth_mb_sec": 0 00:37:38.533 } 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "method": "bdev_iscsi_set_options", 00:37:38.533 "params": { 00:37:38.533 "timeout_sec": 30 00:37:38.533 } 00:37:38.533 }, 00:37:38.533 { 00:37:38.533 "method": "bdev_nvme_set_options", 00:37:38.533 "params": { 00:37:38.533 "action_on_timeout": "none", 00:37:38.533 "timeout_us": 0, 00:37:38.533 "timeout_admin_us": 0, 00:37:38.533 "keep_alive_timeout_ms": 10000, 00:37:38.533 "arbitration_burst": 0, 00:37:38.533 "low_priority_weight": 0, 00:37:38.533 "medium_priority_weight": 0, 00:37:38.533 "high_priority_weight": 0, 00:37:38.534 "nvme_adminq_poll_period_us": 10000, 00:37:38.534 "nvme_ioq_poll_period_us": 0, 00:37:38.534 "io_queue_requests": 512, 00:37:38.534 "delay_cmd_submit": true, 00:37:38.534 
"transport_retry_count": 4, 00:37:38.534 "bdev_retry_count": 3, 00:37:38.534 "transport_ack_timeout": 0, 00:37:38.534 "ctrlr_loss_timeout_sec": 0, 00:37:38.534 "reconnect_delay_sec": 0, 00:37:38.534 "fast_io_fail_timeout_sec": 0, 00:37:38.534 "disable_auto_failback": false, 00:37:38.534 "generate_uuids": false, 00:37:38.534 "transport_tos": 0, 00:37:38.534 "nvme_error_stat": false, 00:37:38.534 "rdma_srq_size": 0, 00:37:38.534 "io_path_stat": false, 00:37:38.534 "allow_accel_sequence": false, 00:37:38.534 "rdma_max_cq_size": 0, 00:37:38.534 "rdma_cm_event_timeout_ms": 0, 00:37:38.534 "dhchap_digests": [ 00:37:38.534 "sha256", 00:37:38.534 "sha384", 00:37:38.534 "sha512" 00:37:38.534 ], 00:37:38.534 "dhchap_dhgroups": [ 00:37:38.534 "null", 00:37:38.534 "ffdhe2048", 00:37:38.534 "ffdhe3072", 00:37:38.534 "ffdhe4096", 00:37:38.534 "ffdhe6144", 00:37:38.534 "ffdhe8192" 00:37:38.534 ] 00:37:38.534 } 00:37:38.534 }, 00:37:38.534 { 00:37:38.534 "method": "bdev_nvme_attach_controller", 00:37:38.534 "params": { 00:37:38.534 "name": "nvme0", 00:37:38.534 "trtype": "TCP", 00:37:38.534 "adrfam": "IPv4", 00:37:38.534 "traddr": "127.0.0.1", 00:37:38.534 "trsvcid": "4420", 00:37:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:38.534 "prchk_reftag": false, 00:37:38.534 "prchk_guard": false, 00:37:38.534 "ctrlr_loss_timeout_sec": 0, 00:37:38.534 "reconnect_delay_sec": 0, 00:37:38.534 "fast_io_fail_timeout_sec": 0, 00:37:38.534 "psk": "key0", 00:37:38.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:38.534 "hdgst": false, 00:37:38.534 "ddgst": false, 00:37:38.534 "multipath": "multipath" 00:37:38.534 } 00:37:38.534 }, 00:37:38.534 { 00:37:38.534 "method": "bdev_nvme_set_hotplug", 00:37:38.534 "params": { 00:37:38.534 "period_us": 100000, 00:37:38.534 "enable": false 00:37:38.534 } 00:37:38.534 }, 00:37:38.534 { 00:37:38.534 "method": "bdev_wait_for_examine" 00:37:38.534 } 00:37:38.534 ] 00:37:38.534 }, 00:37:38.534 { 00:37:38.534 "subsystem": "nbd", 00:37:38.534 "config": [] 00:37:38.534 } 00:37:38.534 ] 00:37:38.534 }' 00:37:38.534 00:45:09 keyring_file -- keyring/file.sh@115 -- # killprocess 3579956 00:37:38.534 00:45:09 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3579956 ']' 00:37:38.534 00:45:09 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3579956 00:37:38.534 00:45:09 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:38.534 00:45:09 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:38.534 00:45:09 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3579956 00:37:38.534 00:45:09 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:38.534 00:45:09 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:38.534 00:45:09 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3579956' 00:37:38.534 killing process with pid 3579956 00:37:38.534 00:45:09 keyring_file -- common/autotest_common.sh@969 -- # kill 3579956 00:37:38.534 Received shutdown signal, test time was about 1.000000 seconds 00:37:38.534 00:37:38.534 Latency(us) 00:37:38.534 [2024-10-08T22:45:09.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.534 [2024-10-08T22:45:09.169Z] =================================================================================================================== 00:37:38.534 [2024-10-08T22:45:09.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:38.534 00:45:09 keyring_file -- 
common/autotest_common.sh@974 -- # wait 3579956 00:37:38.800 00:45:09 keyring_file -- keyring/file.sh@118 -- # bperfpid=3581862 00:37:38.800 00:45:09 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3581862 /var/tmp/bperf.sock 00:37:38.800 00:45:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3581862 ']' 00:37:38.800 00:45:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:38.800 00:45:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:38.800 00:45:09 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:38.800 00:45:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:38.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:38.800 00:45:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:38.800 00:45:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:38.800 00:45:09 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:38.800 "subsystems": [ 00:37:38.800 { 00:37:38.800 "subsystem": "keyring", 00:37:38.800 "config": [ 00:37:38.800 { 00:37:38.800 "method": "keyring_file_add_key", 00:37:38.800 "params": { 00:37:38.800 "name": "key0", 00:37:38.800 "path": "/tmp/tmp.qtBSy5rBL9" 00:37:38.800 } 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "method": "keyring_file_add_key", 00:37:38.801 "params": { 00:37:38.801 "name": "key1", 00:37:38.801 "path": "/tmp/tmp.5WjNROP9Lw" 00:37:38.801 } 00:37:38.801 } 00:37:38.801 ] 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "subsystem": "iobuf", 00:37:38.801 "config": [ 00:37:38.801 { 00:37:38.801 "method": "iobuf_set_options", 00:37:38.801 "params": { 00:37:38.801 "small_pool_count": 8192, 00:37:38.801 "large_pool_count": 1024, 00:37:38.801 "small_bufsize": 8192, 00:37:38.801 "large_bufsize": 135168 00:37:38.801 } 00:37:38.801 } 00:37:38.801 ] 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "subsystem": "sock", 00:37:38.801 "config": [ 00:37:38.801 { 00:37:38.801 "method": "sock_set_default_impl", 00:37:38.801 "params": { 00:37:38.801 "impl_name": "posix" 00:37:38.801 } 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "method": "sock_impl_set_options", 00:37:38.801 "params": { 00:37:38.801 "impl_name": "ssl", 00:37:38.801 "recv_buf_size": 4096, 00:37:38.801 "send_buf_size": 4096, 00:37:38.801 "enable_recv_pipe": true, 00:37:38.801 "enable_quickack": false, 00:37:38.801 "enable_placement_id": 0, 00:37:38.801 "enable_zerocopy_send_server": true, 00:37:38.801 "enable_zerocopy_send_client": false, 00:37:38.801 "zerocopy_threshold": 0, 00:37:38.801 "tls_version": 0, 00:37:38.801 "enable_ktls": false 00:37:38.801 } 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "method": "sock_impl_set_options", 00:37:38.801 "params": { 00:37:38.801 "impl_name": "posix", 00:37:38.801 "recv_buf_size": 2097152, 00:37:38.801 "send_buf_size": 2097152, 00:37:38.801 "enable_recv_pipe": true, 00:37:38.801 "enable_quickack": false, 00:37:38.801 "enable_placement_id": 0, 00:37:38.801 "enable_zerocopy_send_server": true, 00:37:38.801 "enable_zerocopy_send_client": false, 00:37:38.801 "zerocopy_threshold": 0, 00:37:38.801 "tls_version": 0, 00:37:38.801 "enable_ktls": false 00:37:38.801 } 00:37:38.801 } 00:37:38.801 ] 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "subsystem": "vmd", 00:37:38.801 
"config": [] 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "subsystem": "accel", 00:37:38.801 "config": [ 00:37:38.801 { 00:37:38.801 "method": "accel_set_options", 00:37:38.801 "params": { 00:37:38.801 "small_cache_size": 128, 00:37:38.801 "large_cache_size": 16, 00:37:38.801 "task_count": 2048, 00:37:38.801 "sequence_count": 2048, 00:37:38.801 "buf_count": 2048 00:37:38.801 } 00:37:38.801 } 00:37:38.801 ] 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "subsystem": "bdev", 00:37:38.801 "config": [ 00:37:38.801 { 00:37:38.801 "method": "bdev_set_options", 00:37:38.801 "params": { 00:37:38.801 "bdev_io_pool_size": 65535, 00:37:38.801 "bdev_io_cache_size": 256, 00:37:38.801 "bdev_auto_examine": true, 00:37:38.801 "iobuf_small_cache_size": 128, 00:37:38.801 "iobuf_large_cache_size": 16 00:37:38.801 } 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "method": "bdev_raid_set_options", 00:37:38.801 "params": { 00:37:38.801 "process_window_size_kb": 1024, 00:37:38.801 "process_max_bandwidth_mb_sec": 0 00:37:38.801 } 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "method": "bdev_iscsi_set_options", 00:37:38.801 "params": { 00:37:38.801 "timeout_sec": 30 00:37:38.801 } 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "method": "bdev_nvme_set_options", 00:37:38.801 "params": { 00:37:38.801 "action_on_timeout": "none", 00:37:38.801 "timeout_us": 0, 00:37:38.801 "timeout_admin_us": 0, 00:37:38.801 "keep_alive_timeout_ms": 10000, 00:37:38.801 "arbitration_burst": 0, 00:37:38.801 "low_priority_weight": 0, 00:37:38.801 "medium_priority_weight": 0, 00:37:38.801 "high_priority_weight": 0, 00:37:38.801 "nvme_adminq_poll_period_us": 10000, 00:37:38.801 "nvme_ioq_poll_period_us": 0, 00:37:38.801 "io_queue_requests": 512, 00:37:38.801 "delay_cmd_submit": true, 00:37:38.801 "transport_retry_count": 4, 00:37:38.801 "bdev_retry_count": 3, 00:37:38.801 "transport_ack_timeout": 0, 00:37:38.801 "ctrlr_loss_timeout_sec": 0, 00:37:38.801 "reconnect_delay_sec": 0, 00:37:38.801 "fast_io_fail_timeout_sec": 0, 00:37:38.801 "disable_auto_failback": false, 00:37:38.801 "generate_uuids": false, 00:37:38.801 "transport_tos": 0, 00:37:38.801 "nvme_error_stat": false, 00:37:38.801 "rdma_srq_size": 0, 00:37:38.801 "io_path_stat": false, 00:37:38.801 "allow_accel_sequence": false, 00:37:38.801 "rdma_max_cq_size": 0, 00:37:38.801 "rdma_cm_event_timeout_ms": 0, 00:37:38.801 "dhchap_digests": [ 00:37:38.801 "sha256", 00:37:38.801 "sha384", 00:37:38.801 "sha512" 00:37:38.801 ], 00:37:38.801 "dhchap_dhgroups": [ 00:37:38.801 "null", 00:37:38.801 "ffdhe2048", 00:37:38.801 "ffdhe3072", 00:37:38.801 "ffdhe4096", 00:37:38.801 "ffdhe6144", 00:37:38.801 "ffdhe8192" 00:37:38.801 ] 00:37:38.801 } 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "method": "bdev_nvme_attach_controller", 00:37:38.801 "params": { 00:37:38.801 "name": "nvme0", 00:37:38.801 "trtype": "TCP", 00:37:38.801 "adrfam": "IPv4", 00:37:38.801 "traddr": "127.0.0.1", 00:37:38.801 "trsvcid": "4420", 00:37:38.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:38.801 "prchk_reftag": false, 00:37:38.801 "prchk_guard": false, 00:37:38.801 "ctrlr_loss_timeout_sec": 0, 00:37:38.801 "reconnect_delay_sec": 0, 00:37:38.801 "fast_io_fail_timeout_sec": 0, 00:37:38.801 "psk": "key0", 00:37:38.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:38.801 "hdgst": false, 00:37:38.801 "ddgst": false, 00:37:38.801 "multipath": "multipath" 00:37:38.801 } 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "method": "bdev_nvme_set_hotplug", 00:37:38.801 "params": { 00:37:38.801 "period_us": 100000, 00:37:38.801 "enable": false 
00:37:38.801 } 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "method": "bdev_wait_for_examine" 00:37:38.801 } 00:37:38.801 ] 00:37:38.801 }, 00:37:38.801 { 00:37:38.801 "subsystem": "nbd", 00:37:38.801 "config": [] 00:37:38.801 } 00:37:38.801 ] 00:37:38.801 }' 00:37:38.801 [2024-10-09 00:45:09.279230] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:37:38.801 [2024-10-09 00:45:09.279287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581862 ] 00:37:38.801 [2024-10-09 00:45:09.352608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.801 [2024-10-09 00:45:09.406052] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:39.108 [2024-10-09 00:45:09.548413] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:39.737 00:45:10 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:39.737 00:45:10 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:39.737 00:45:10 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:39.737 00:45:10 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:39.737 00:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.737 00:45:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:39.737 00:45:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:39.737 00:45:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:39.737 00:45:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:39.737 00:45:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:39.737 00:45:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:39.737 00:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.998 00:45:10 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:39.998 00:45:10 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:39.998 00:45:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:39.998 00:45:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:39.998 00:45:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:39.998 00:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.998 00:45:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:39.998 00:45:10 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:39.998 00:45:10 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:39.999 00:45:10 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:39.999 00:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:40.267 00:45:10 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:40.267 00:45:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:40.267 00:45:10 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.qtBSy5rBL9 
/tmp/tmp.5WjNROP9Lw 00:37:40.267 00:45:10 keyring_file -- keyring/file.sh@20 -- # killprocess 3581862 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3581862 ']' 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3581862 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3581862 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3581862' 00:37:40.267 killing process with pid 3581862 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@969 -- # kill 3581862 00:37:40.267 Received shutdown signal, test time was about 1.000000 seconds 00:37:40.267 00:37:40.267 Latency(us) 00:37:40.267 [2024-10-08T22:45:10.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.267 [2024-10-08T22:45:10.902Z] =================================================================================================================== 00:37:40.267 [2024-10-08T22:45:10.902Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:40.267 00:45:10 keyring_file -- common/autotest_common.sh@974 -- # wait 3581862 00:37:40.529 00:45:10 keyring_file -- keyring/file.sh@21 -- # killprocess 3579627 00:37:40.529 00:45:10 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3579627 ']' 00:37:40.529 00:45:10 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3579627 00:37:40.529 00:45:10 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:40.529 00:45:10 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:40.529 00:45:10 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3579627 00:37:40.529 00:45:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:40.529 00:45:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:40.529 00:45:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3579627' 00:37:40.529 killing process with pid 3579627 00:37:40.529 00:45:11 keyring_file -- common/autotest_common.sh@969 -- # kill 3579627 00:37:40.529 00:45:11 keyring_file -- common/autotest_common.sh@974 -- # wait 3579627 00:37:40.789 00:37:40.789 real 0m12.125s 00:37:40.789 user 0m29.328s 00:37:40.789 sys 0m2.693s 00:37:40.789 00:45:11 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:40.789 00:45:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:40.789 ************************************ 00:37:40.789 END TEST keyring_file 00:37:40.789 ************************************ 00:37:40.789 00:45:11 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:37:40.789 00:45:11 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:40.789 00:45:11 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:40.789 00:45:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:40.789 00:45:11 -- common/autotest_common.sh@10 -- # set +x 
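The sequence the keyring_file test just walked through reduces to four RPCs: register a PSK file under a key name, attach a TLS controller that references that name, then tear both down. A condensed sketch is shown below using the same RPCs seen in the trace; the key-file path is a placeholder and the file is assumed to already exist with mode 0600 (the 0660 attempt above was rejected):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc keyring_file_add_key key0 /tmp/psk.key        # placeholder path; file must be chmod 0600
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
$rpc bdev_nvme_detach_controller nvme0             # drops the controller's reference on key0
$rpc keyring_file_remove_key key0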
00:37:40.789 ************************************ 00:37:40.789 START TEST keyring_linux 00:37:40.789 ************************************ 00:37:40.790 00:45:11 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:40.790 Joined session keyring: 995103129 00:37:40.790 * Looking for test storage... 00:37:41.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:41.051 00:45:11 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:41.051 00:45:11 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:37:41.051 00:45:11 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:41.051 00:45:11 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:41.051 00:45:11 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:41.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.052 --rc genhtml_branch_coverage=1 00:37:41.052 --rc genhtml_function_coverage=1 00:37:41.052 --rc genhtml_legend=1 00:37:41.052 --rc geninfo_all_blocks=1 00:37:41.052 --rc geninfo_unexecuted_blocks=1 00:37:41.052 00:37:41.052 ' 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:41.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.052 --rc genhtml_branch_coverage=1 00:37:41.052 --rc genhtml_function_coverage=1 00:37:41.052 --rc genhtml_legend=1 00:37:41.052 --rc geninfo_all_blocks=1 00:37:41.052 --rc geninfo_unexecuted_blocks=1 00:37:41.052 00:37:41.052 ' 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:41.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.052 --rc genhtml_branch_coverage=1 00:37:41.052 --rc genhtml_function_coverage=1 00:37:41.052 --rc genhtml_legend=1 00:37:41.052 --rc geninfo_all_blocks=1 00:37:41.052 --rc geninfo_unexecuted_blocks=1 00:37:41.052 00:37:41.052 ' 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:41.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.052 --rc genhtml_branch_coverage=1 00:37:41.052 --rc genhtml_function_coverage=1 00:37:41.052 --rc genhtml_legend=1 00:37:41.052 --rc geninfo_all_blocks=1 00:37:41.052 --rc geninfo_unexecuted_blocks=1 00:37:41.052 00:37:41.052 ' 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:41.052 00:45:11 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:41.052 00:45:11 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:41.052 00:45:11 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:41.052 00:45:11 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:41.052 00:45:11 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.052 00:45:11 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.052 00:45:11 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.052 00:45:11 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:41.052 00:45:11 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
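The NVME_HOSTNQN exported a few lines up comes from nvme-cli's gen-hostnqn, which wraps a UUID in the standard uuid-based NQN prefix. A rough equivalent is sketched below; it assumes uuidgen is available and that a freshly generated UUID is acceptable (gen-hostnqn may instead reuse a host-specific UUID):

# Build a uuid-style host NQN like the `nvme gen-hostnqn` output captured above
NVME_HOSTID=$(uuidgen)
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${NVME_HOSTID}"
echo "$NVME_HOSTNQN"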
00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:41.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:41.052 /tmp/:spdk-test:key0 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:41.052 
00:45:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:41.052 00:45:11 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:41.052 00:45:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:41.052 /tmp/:spdk-test:key1 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3582736 00:37:41.052 00:45:11 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3582736 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3582736 ']' 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:41.052 00:45:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:41.312 [2024-10-09 00:45:11.702351] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
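As context for the prep_key/format_interchange_psk steps traced above: the raw hex test key is wrapped into an NVMe TLS PSK interchange string (NVMeTLSkey-1:<hash>:<base64>:) and written to /tmp/:spdk-test:key0. The sketch below is a minimal reconstruction, not the verbatim helper from nvmf/common.sh; in particular, appending a little-endian CRC32 of the key bytes before base64-encoding is an assumption made to match the values seen in this run.

# Sketch only: assumes interchange payload = base64(key_bytes + CRC32(key_bytes)).
key=00112233445566778899aabbccddeeff    # same test key as keyring/linux.sh@13
digest=0                                # 0 == plaintext (unhashed) configured PSK
b64=$(python3 -c "
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)   # CRC32 append + byte order are assumptions
print(base64.b64encode(key + crc).decode())
" "$key")
psk=$(printf 'NVMeTLSkey-1:%02d:%s:' "$digest" "$b64")
umask 077
printf '%s' "$psk" > /tmp/:spdk-test:key0   # path matches keyring/common.sh@18
chmod 0600 /tmp/:spdk-test:key0             # same permissions as keyring/common.sh@21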
00:37:41.312 [2024-10-09 00:45:11.702426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582736 ] 00:37:41.312 [2024-10-09 00:45:11.782446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.312 [2024-10-09 00:45:11.844265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:41.883 00:45:12 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:41.883 00:45:12 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:41.883 00:45:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:41.883 00:45:12 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.883 00:45:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:41.883 [2024-10-09 00:45:12.511686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.143 null0 00:37:42.143 [2024-10-09 00:45:12.543746] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:42.143 [2024-10-09 00:45:12.544103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:42.143 00:45:12 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.143 00:45:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:42.143 960801932 00:37:42.143 00:45:12 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:42.143 968203111 00:37:42.143 00:45:12 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3582933 00:37:42.143 00:45:12 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3582933 /var/tmp/bperf.sock 00:37:42.143 00:45:12 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:42.143 00:45:12 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3582933 ']' 00:37:42.143 00:45:12 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:42.143 00:45:12 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:42.143 00:45:12 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:42.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:42.143 00:45:12 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:42.143 00:45:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:42.143 [2024-10-09 00:45:12.630426] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
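The keyctl calls above register the formatted PSKs as "user"-type keys in the session keyring (@s) under the names that the later --psk options reference. A condensed sketch of the same key lifecycle, using only subcommands that appear in this log (the serial 960801932 printed here is specific to this run):

psk=$(cat /tmp/:spdk-test:key0)                  # interchange string written during prep_key
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # prints the new key's serial number
keyctl search @s user :spdk-test:key0            # name -> serial lookup (get_keysn, keyring/linux.sh@16)
keyctl print "$sn"                               # shows the NVMeTLSkey-1:00:... payload
keyctl unlink "$sn"                              # cleanup, mirroring unlink_key at test exit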
00:37:42.143 [2024-10-09 00:45:12.630492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582933 ] 00:37:42.143 [2024-10-09 00:45:12.707055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.143 [2024-10-09 00:45:12.760942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.092 00:45:13 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:43.092 00:45:13 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:43.092 00:45:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:43.092 00:45:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:43.092 00:45:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:43.092 00:45:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:43.353 00:45:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:43.353 00:45:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:43.353 [2024-10-09 00:45:13.944706] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:43.613 nvme0n1 00:37:43.613 00:45:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:43.613 00:45:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:43.613 00:45:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:43.613 00:45:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:43.613 00:45:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:43.613 00:45:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.613 00:45:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:43.613 00:45:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:43.613 00:45:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:43.613 00:45:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:43.613 00:45:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.613 00:45:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:43.613 00:45:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.873 00:45:14 keyring_linux -- keyring/linux.sh@25 -- # sn=960801932 00:37:43.873 00:45:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:43.873 00:45:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:43.873 00:45:14 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 960801932 == \9\6\0\8\0\1\9\3\2 ]] 00:37:43.873 00:45:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 960801932 00:37:43.873 00:45:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:43.873 00:45:14 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:43.873 Running I/O for 1 seconds... 00:37:45.256 24386.00 IOPS, 95.26 MiB/s 00:37:45.257 Latency(us) 00:37:45.257 [2024-10-08T22:45:15.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.257 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:45.257 nvme0n1 : 1.01 24385.90 95.26 0.00 0.00 5233.62 4423.68 13052.59 00:37:45.257 [2024-10-08T22:45:15.892Z] =================================================================================================================== 00:37:45.257 [2024-10-08T22:45:15.892Z] Total : 24385.90 95.26 0.00 0.00 5233.62 4423.68 13052.59 00:37:45.257 { 00:37:45.257 "results": [ 00:37:45.257 { 00:37:45.257 "job": "nvme0n1", 00:37:45.257 "core_mask": "0x2", 00:37:45.257 "workload": "randread", 00:37:45.257 "status": "finished", 00:37:45.257 "queue_depth": 128, 00:37:45.257 "io_size": 4096, 00:37:45.257 "runtime": 1.005253, 00:37:45.257 "iops": 24385.90086276788, 00:37:45.257 "mibps": 95.25742524518704, 00:37:45.257 "io_failed": 0, 00:37:45.257 "io_timeout": 0, 00:37:45.257 "avg_latency_us": 5233.615516575563, 00:37:45.257 "min_latency_us": 4423.68, 00:37:45.257 "max_latency_us": 13052.586666666666 00:37:45.257 } 00:37:45.257 ], 00:37:45.257 "core_count": 1 00:37:45.257 } 00:37:45.257 00:45:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:45.257 00:45:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:45.257 00:45:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:45.257 00:45:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:45.257 00:45:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:45.257 00:45:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:45.257 00:45:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:45.257 00:45:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:45.517 00:45:15 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:45.517 00:45:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:45.517 00:45:15 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:45.517 00:45:15 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:45.517 00:45:15 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:37:45.517 00:45:15 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
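A note on reading the bdevperf summary above: the MiB/s column is simply IOPS scaled by the configured I/O size (-o 4k), so the two figures are redundant checks on each other.

awk 'BEGIN { printf "%.2f MiB/s\n", 24385.90 * 4096 / 1048576 }'   # -> 95.26 MiB/s, matching the table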
00:37:45.517 00:45:15 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:45.517 00:45:15 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:45.517 00:45:15 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:45.517 00:45:15 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:45.517 00:45:15 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:45.517 00:45:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:45.517 [2024-10-09 00:45:16.042818] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:45.517 [2024-10-09 00:45:16.043492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158ba70 (107): Transport endpoint is not connected 00:37:45.518 [2024-10-09 00:45:16.044489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158ba70 (9): Bad file descriptor 00:37:45.518 [2024-10-09 00:45:16.045491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:45.518 [2024-10-09 00:45:16.045497] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:45.518 [2024-10-09 00:45:16.045503] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:45.518 [2024-10-09 00:45:16.045509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
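The successful and failed attach attempts above reduce to the RPC sequence sketched below, issued against the bdevperf socket (bdevperf was started earlier with -z --wait-for-rpc). All flags are copied from the bperf_cmd lines in this log; the final call is expected to fail with the JSON-RPC -5 error shown above, presumably because the listener was set up with key0 rather than key1.

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC keyring_linux_set_options --enable     # let SPDK resolve ":spdk-test:*" names via the kernel keyring
$RPC framework_start_init
# Succeeds: key0 matches the PSK registered for the TCP listener.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
$RPC bdev_nvme_detach_controller nvme0
# Negative case: wrong PSK, the controller ends up in failed state as traced above.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 || true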
00:37:45.518 request: 00:37:45.518 { 00:37:45.518 "name": "nvme0", 00:37:45.518 "trtype": "tcp", 00:37:45.518 "traddr": "127.0.0.1", 00:37:45.518 "adrfam": "ipv4", 00:37:45.518 "trsvcid": "4420", 00:37:45.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:45.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:45.518 "prchk_reftag": false, 00:37:45.518 "prchk_guard": false, 00:37:45.518 "hdgst": false, 00:37:45.518 "ddgst": false, 00:37:45.518 "psk": ":spdk-test:key1", 00:37:45.518 "allow_unrecognized_csi": false, 00:37:45.518 "method": "bdev_nvme_attach_controller", 00:37:45.518 "req_id": 1 00:37:45.518 } 00:37:45.518 Got JSON-RPC error response 00:37:45.518 response: 00:37:45.518 { 00:37:45.518 "code": -5, 00:37:45.518 "message": "Input/output error" 00:37:45.518 } 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@33 -- # sn=960801932 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 960801932 00:37:45.518 1 links removed 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@33 -- # sn=968203111 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 968203111 00:37:45.518 1 links removed 00:37:45.518 00:45:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3582933 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3582933 ']' 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3582933 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3582933 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3582933' 00:37:45.518 killing process with pid 3582933 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@969 -- # kill 3582933 00:37:45.518 Received shutdown signal, test time was about 1.000000 seconds 00:37:45.518 00:37:45.518 
Latency(us) 00:37:45.518 [2024-10-08T22:45:16.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.518 [2024-10-08T22:45:16.153Z] =================================================================================================================== 00:37:45.518 [2024-10-08T22:45:16.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:45.518 00:45:16 keyring_linux -- common/autotest_common.sh@974 -- # wait 3582933 00:37:45.791 00:45:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3582736 00:37:45.791 00:45:16 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3582736 ']' 00:37:45.791 00:45:16 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3582736 00:37:45.791 00:45:16 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:45.791 00:45:16 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:45.791 00:45:16 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3582736 00:37:45.792 00:45:16 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:45.792 00:45:16 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:45.792 00:45:16 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3582736' 00:37:45.792 killing process with pid 3582736 00:37:45.792 00:45:16 keyring_linux -- common/autotest_common.sh@969 -- # kill 3582736 00:37:45.792 00:45:16 keyring_linux -- common/autotest_common.sh@974 -- # wait 3582736 00:37:46.058 00:37:46.058 real 0m5.212s 00:37:46.058 user 0m9.662s 00:37:46.058 sys 0m1.434s 00:37:46.058 00:45:16 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:46.058 00:45:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:46.058 ************************************ 00:37:46.058 END TEST keyring_linux 00:37:46.058 ************************************ 00:37:46.058 00:45:16 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:46.058 00:45:16 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:37:46.058 00:45:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:46.058 00:45:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:46.058 00:45:16 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:37:46.058 00:45:16 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:37:46.058 00:45:16 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:37:46.058 00:45:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:46.058 00:45:16 -- common/autotest_common.sh@10 -- # set +x 00:37:46.058 00:45:16 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:37:46.058 00:45:16 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:46.058 00:45:16 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:46.058 00:45:16 -- common/autotest_common.sh@10 -- # set +x 00:37:54.214 INFO: APP EXITING 
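The killprocess calls traced above follow a small guard-then-signal pattern before the test harness exits. The sketch below is a simplified reconstruction of that flow, based only on the checks visible in this trace; the real helper in autotest_common.sh also branches on the OS reported by uname and on sudo-owned processes.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0              # nothing to do if it already exited
    local name
    name=$(ps --no-headers -o comm= "$pid")             # e.g. reactor_0 / reactor_1 in this run
    [ "$name" != sudo ] || return 1                     # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                     # reap it; works because the shell started it
}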
00:37:54.214 INFO: killing all VMs 00:37:54.214 INFO: killing vhost app 00:37:54.214 INFO: EXIT DONE 00:37:57.513 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:57.513 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:57.513 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:00.823 Cleaning 00:38:00.823 Removing: /var/run/dpdk/spdk0/config 00:38:00.823 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:00.823 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:00.823 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:00.823 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:00.823 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:00.823 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:00.823 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:00.823 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:00.823 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:00.823 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:00.823 Removing: /var/run/dpdk/spdk1/config 00:38:00.823 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:00.823 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:00.823 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:00.823 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:00.823 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:00.823 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:00.823 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:00.824 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:00.824 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:00.824 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:00.824 Removing: /var/run/dpdk/spdk2/config 00:38:00.824 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:00.824 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:00.824 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:00.824 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:00.824 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:00.824 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:00.824 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:01.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:01.093 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:01.093 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:01.093 Removing: /var/run/dpdk/spdk3/config 00:38:01.093 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:01.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:01.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:01.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:01.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:01.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:01.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:01.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:01.093 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:01.093 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:01.093 Removing: /var/run/dpdk/spdk4/config 00:38:01.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:01.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:01.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:01.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:01.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:01.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:01.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:01.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:01.093 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:01.093 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:01.093 Removing: /dev/shm/bdev_svc_trace.1 00:38:01.093 Removing: /dev/shm/nvmf_trace.0 00:38:01.093 Removing: /dev/shm/spdk_tgt_trace.pid3011535 00:38:01.093 Removing: /var/run/dpdk/spdk0 00:38:01.093 Removing: /var/run/dpdk/spdk1 00:38:01.093 Removing: /var/run/dpdk/spdk2 00:38:01.093 Removing: /var/run/dpdk/spdk3 00:38:01.093 Removing: /var/run/dpdk/spdk4 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3009864 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3011535 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3012202 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3013250 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3013581 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3014733 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3014989 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3015369 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3016356 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3017049 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3017442 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3017840 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3018255 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3018652 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3018870 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3019050 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3019437 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3020819 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3024747 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3025314 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3025600 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3025737 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3026198 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3026462 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3026864 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3027172 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3027537 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3027557 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3027913 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3027969 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3028658 00:38:01.093 Removing: /var/run/dpdk/spdk_pid3028810 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3029141 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3033942 00:38:01.353 Removing: 
/var/run/dpdk/spdk_pid3039074 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3051138 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3051817 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3057135 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3057566 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3062627 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3069704 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3073057 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3086232 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3097281 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3099297 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3100316 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3121320 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3126073 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3182584 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3189548 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3196737 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3203960 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3203962 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3204965 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3205984 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3206988 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3207667 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3207669 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3208003 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3208094 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3208202 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3209255 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3210272 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3211352 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3211922 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3212025 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3212262 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3213644 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3214883 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3224876 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3258641 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3264051 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3266051 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3268324 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3268511 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3268763 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3269098 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3269921 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3272737 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3273925 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3274553 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3277258 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3277975 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3278825 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3283765 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3290451 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3290452 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3290453 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3295142 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3305392 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3310206 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3317222 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3318799 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3320554 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3322853 00:38:01.353 Removing: /var/run/dpdk/spdk_pid3328613 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3333530 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3342750 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3342755 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3347803 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3348135 00:38:01.613 Removing: 
/var/run/dpdk/spdk_pid3348420 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3348816 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3348822 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3354517 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3355026 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3360530 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3363711 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3370278 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3376934 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3387625 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3396295 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3396297 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3418465 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3419207 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3420076 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3420840 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3421898 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3422579 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3423271 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3423988 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3429425 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3429975 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3437264 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3437596 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3444103 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3449144 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3460778 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3461458 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3466502 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3466855 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3471897 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3478878 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3482455 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3494746 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3505430 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3507438 00:38:01.613 Removing: /var/run/dpdk/spdk_pid3508445 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3528050 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3532878 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3536544 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3544301 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3544313 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3550188 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3552410 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3554895 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3556091 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3558607 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3559936 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3569758 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3570420 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3571084 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3573945 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3574386 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3575042 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3579627 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3579956 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3581862 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3582736 00:38:01.614 Removing: /var/run/dpdk/spdk_pid3582933 00:38:01.614 Clean 00:38:01.874 00:45:32 -- common/autotest_common.sh@1451 -- # return 0 00:38:01.874 00:45:32 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:01.874 00:45:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:01.874 00:45:32 -- common/autotest_common.sh@10 -- # set +x 00:38:01.874 00:45:32 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:01.874 00:45:32 -- common/autotest_common.sh@730 -- # 
xtrace_disable 00:38:01.874 00:45:32 -- common/autotest_common.sh@10 -- # set +x 00:38:01.874 00:45:32 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:01.874 00:45:32 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:01.874 00:45:32 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:01.874 00:45:32 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:01.874 00:45:32 -- spdk/autotest.sh@394 -- # hostname 00:38:01.874 00:45:32 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:02.142 geninfo: WARNING: invalid characters removed from testname! 00:38:28.743 00:45:57 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:30.129 00:46:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:32.687 00:46:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:34.070 00:46:04 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:35.464 00:46:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:37.381 00:46:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:39.292 00:46:09 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:39.554 00:46:09 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:38:39.554 00:46:09 -- common/autotest_common.sh@1681 -- $ lcov --version 00:38:39.554 00:46:09 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:38:39.554 00:46:10 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:38:39.554 00:46:10 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:38:39.554 00:46:10 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:38:39.554 00:46:10 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:38:39.554 00:46:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:38:39.554 00:46:10 -- scripts/common.sh@336 -- $ read -ra ver1 00:38:39.554 00:46:10 -- scripts/common.sh@337 -- $ IFS=.-: 00:38:39.554 00:46:10 -- scripts/common.sh@337 -- $ read -ra ver2 00:38:39.554 00:46:10 -- scripts/common.sh@338 -- $ local 'op=<' 00:38:39.554 00:46:10 -- scripts/common.sh@340 -- $ ver1_l=2 00:38:39.554 00:46:10 -- scripts/common.sh@341 -- $ ver2_l=1 00:38:39.554 00:46:10 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:38:39.554 00:46:10 -- scripts/common.sh@344 -- $ case "$op" in 00:38:39.554 00:46:10 -- scripts/common.sh@345 -- $ : 1 00:38:39.554 00:46:10 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:38:39.554 00:46:10 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:39.554 00:46:10 -- scripts/common.sh@365 -- $ decimal 1 00:38:39.554 00:46:10 -- scripts/common.sh@353 -- $ local d=1 00:38:39.554 00:46:10 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:38:39.554 00:46:10 -- scripts/common.sh@355 -- $ echo 1 00:38:39.554 00:46:10 -- scripts/common.sh@365 -- $ ver1[v]=1 00:38:39.554 00:46:10 -- scripts/common.sh@366 -- $ decimal 2 00:38:39.554 00:46:10 -- scripts/common.sh@353 -- $ local d=2 00:38:39.554 00:46:10 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:38:39.554 00:46:10 -- scripts/common.sh@355 -- $ echo 2 00:38:39.554 00:46:10 -- scripts/common.sh@366 -- $ ver2[v]=2 00:38:39.554 00:46:10 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:38:39.554 00:46:10 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:38:39.554 00:46:10 -- scripts/common.sh@368 -- $ return 0 00:38:39.554 00:46:10 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:39.554 00:46:10 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:38:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.554 --rc genhtml_branch_coverage=1 00:38:39.554 --rc genhtml_function_coverage=1 00:38:39.554 --rc genhtml_legend=1 00:38:39.554 --rc geninfo_all_blocks=1 00:38:39.554 --rc geninfo_unexecuted_blocks=1 00:38:39.554 00:38:39.554 ' 00:38:39.554 00:46:10 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:38:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.554 --rc genhtml_branch_coverage=1 00:38:39.554 --rc genhtml_function_coverage=1 00:38:39.554 --rc genhtml_legend=1 00:38:39.554 --rc geninfo_all_blocks=1 00:38:39.554 --rc geninfo_unexecuted_blocks=1 00:38:39.554 00:38:39.554 ' 00:38:39.554 00:46:10 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:38:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:39.554 --rc genhtml_branch_coverage=1 00:38:39.554 --rc genhtml_function_coverage=1 00:38:39.554 --rc genhtml_legend=1 00:38:39.554 --rc geninfo_all_blocks=1 00:38:39.554 --rc geninfo_unexecuted_blocks=1 00:38:39.554 00:38:39.554 ' 00:38:39.554 00:46:10 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:38:39.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.554 --rc genhtml_branch_coverage=1 00:38:39.554 --rc genhtml_function_coverage=1 00:38:39.554 --rc genhtml_legend=1 00:38:39.554 --rc geninfo_all_blocks=1 00:38:39.554 --rc geninfo_unexecuted_blocks=1 00:38:39.554 00:38:39.554 ' 00:38:39.554 00:46:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:39.554 00:46:10 -- scripts/common.sh@15 -- $ shopt -s extglob 00:38:39.554 00:46:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:39.554 00:46:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:39.554 00:46:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:39.554 00:46:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.554 00:46:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.554 00:46:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.554 00:46:10 -- paths/export.sh@5 -- $ export PATH 00:38:39.554 00:46:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.554 00:46:10 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:39.554 00:46:10 -- common/autobuild_common.sh@486 -- $ date +%s 00:38:39.554 00:46:10 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728427570.XXXXXX 00:38:39.554 00:46:10 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728427570.qEVPIk 00:38:39.554 00:46:10 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:38:39.554 00:46:10 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:38:39.554 00:46:10 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:38:39.554 00:46:10 
-- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:39.554 00:46:10 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:39.554 00:46:10 -- common/autobuild_common.sh@502 -- $ get_config_params 00:38:39.554 00:46:10 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:38:39.554 00:46:10 -- common/autotest_common.sh@10 -- $ set +x 00:38:39.554 00:46:10 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:38:39.554 00:46:10 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:38:39.554 00:46:10 -- pm/common@17 -- $ local monitor 00:38:39.555 00:46:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:39.555 00:46:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:39.555 00:46:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:39.555 00:46:10 -- pm/common@21 -- $ date +%s 00:38:39.555 00:46:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:39.555 00:46:10 -- pm/common@25 -- $ sleep 1 00:38:39.555 00:46:10 -- pm/common@21 -- $ date +%s 00:38:39.555 00:46:10 -- pm/common@21 -- $ date +%s 00:38:39.555 00:46:10 -- pm/common@21 -- $ date +%s 00:38:39.555 00:46:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728427570 00:38:39.555 00:46:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728427570 00:38:39.555 00:46:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728427570 00:38:39.555 00:46:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728427570 00:38:39.816 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728427570_collect-cpu-load.pm.log 00:38:39.816 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728427570_collect-vmstat.pm.log 00:38:39.816 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728427570_collect-cpu-temp.pm.log 00:38:39.816 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728427570_collect-bmc-pm.bmc.pm.log 00:38:40.760 00:46:11 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:38:40.760 00:46:11 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:38:40.760 00:46:11 -- spdk/autopackage.sh@14 -- $ timing_finish 00:38:40.760 00:46:11 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:40.760 00:46:11 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:40.760 00:46:11 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:40.760 00:46:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:40.760 00:46:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:40.760 00:46:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:40.760 00:46:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:40.760 00:46:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:40.760 00:46:11 -- pm/common@44 -- $ pid=3595713 00:38:40.760 00:46:11 -- pm/common@50 -- $ kill -TERM 3595713 00:38:40.760 00:46:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:40.760 00:46:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:40.760 00:46:11 -- pm/common@44 -- $ pid=3595714 00:38:40.760 00:46:11 -- pm/common@50 -- $ kill -TERM 3595714 00:38:40.760 00:46:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:40.760 00:46:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:40.760 00:46:11 -- pm/common@44 -- $ pid=3595716 00:38:40.760 00:46:11 -- pm/common@50 -- $ kill -TERM 3595716 00:38:40.760 00:46:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:40.760 00:46:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:40.760 00:46:11 -- pm/common@44 -- $ pid=3595741 00:38:40.760 00:46:11 -- pm/common@50 -- $ sudo -E kill -TERM 3595741 00:38:40.760 + [[ -n 2925087 ]] 00:38:40.760 + sudo kill 2925087 00:38:40.770 [Pipeline] } 00:38:40.784 [Pipeline] // stage 00:38:40.788 [Pipeline] } 00:38:40.801 [Pipeline] // timeout 00:38:40.805 [Pipeline] } 00:38:40.818 [Pipeline] // catchError 00:38:40.823 [Pipeline] } 00:38:40.837 [Pipeline] // wrap 00:38:40.843 [Pipeline] } 00:38:40.855 [Pipeline] // catchError 00:38:40.863 [Pipeline] stage 00:38:40.865 [Pipeline] { (Epilogue) 00:38:40.878 [Pipeline] catchError 00:38:40.879 [Pipeline] { 00:38:40.892 [Pipeline] echo 00:38:40.894 Cleanup processes 00:38:40.901 [Pipeline] sh 00:38:41.197 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:41.197 3595864 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:41.197 3596410 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:41.212 [Pipeline] sh 00:38:41.502 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:41.502 ++ grep -v 'sudo pgrep' 00:38:41.502 ++ awk '{print $1}' 00:38:41.502 + sudo kill -9 3595864 00:38:41.515 [Pipeline] sh 00:38:41.806 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:54.042 [Pipeline] sh 00:38:54.331 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:54.331 Artifacts sizes are good 00:38:54.346 [Pipeline] archiveArtifacts 00:38:54.354 Archiving artifacts 00:38:54.560 [Pipeline] sh 00:38:54.964 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:54.979 [Pipeline] cleanWs 00:38:54.990 [WS-CLEANUP] Deleting 
project workspace... 00:38:54.990 [WS-CLEANUP] Deferred wipeout is used... 00:38:54.997 [WS-CLEANUP] done 00:38:54.999 [Pipeline] } 00:38:55.016 [Pipeline] // catchError 00:38:55.027 [Pipeline] sh 00:38:55.316 + logger -p user.info -t JENKINS-CI 00:38:55.326 [Pipeline] } 00:38:55.340 [Pipeline] // stage 00:38:55.345 [Pipeline] } 00:38:55.359 [Pipeline] // node 00:38:55.364 [Pipeline] End of Pipeline 00:38:55.410 Finished: SUCCESS
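For context on the coverage post-processing near the end of this log: a test-time lcov snapshot is captured, merged with the pre-test baseline, and then pruned of dpdk, /usr and example paths before the report step. The condensed sketch below keeps only the options visible in the commands above; paths are shortened and the genhtml_* rc flags are omitted, so treat it as an outline rather than the exact autotest.sh invocation.

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
lcov $LCOV_OPTS -q -c --no-external -d . -t "$(hostname)" -o ../output/cov_test.info
lcov $LCOV_OPTS -q -a ../output/cov_base.info -a ../output/cov_test.info -o ../output/cov_total.info
lcov $LCOV_OPTS -q -r ../output/cov_total.info '*/dpdk/*' -o ../output/cov_total.info
lcov $LCOV_OPTS -q -r ../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o ../output/cov_total.info
lcov $LCOV_OPTS -q -r ../output/cov_total.info '*/examples/vmd/*' -o ../output/cov_total.info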